I0524 21:10:20.658734 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0524 21:10:20.659018 6 e2e.go:109] Starting e2e run "c97398e3-2977-49fe-add3-364fa823d11a" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1590354619 - Will randomize all specs Will run 278 of 4842 specs May 24 21:10:20.718: INFO: >>> kubeConfig: /root/.kube/config May 24 21:10:20.720: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 24 21:10:20.741: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 24 21:10:20.773: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 24 21:10:20.773: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 24 21:10:20.773: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 24 21:10:20.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 24 21:10:20.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 24 21:10:20.779: INFO: e2e test version: v1.17.4 May 24 21:10:20.780: INFO: kube-apiserver version: v1.17.2 May 24 21:10:20.780: INFO: >>> kubeConfig: /root/.kube/config May 24 21:10:20.785: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:10:20.785: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api May 24 21:10:20.872: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 24 21:10:20.881: INFO: Waiting up to 5m0s for pod "downward-api-24e80ac1-b2dd-41ad-9971-966ce95f0a49" in namespace "downward-api-7985" to be "success or failure" May 24 21:10:20.892: INFO: Pod "downward-api-24e80ac1-b2dd-41ad-9971-966ce95f0a49": Phase="Pending", Reason="", readiness=false. Elapsed: 10.744217ms May 24 21:10:22.896: INFO: Pod "downward-api-24e80ac1-b2dd-41ad-9971-966ce95f0a49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014948986s May 24 21:10:24.899: INFO: Pod "downward-api-24e80ac1-b2dd-41ad-9971-966ce95f0a49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018177222s STEP: Saw pod success May 24 21:10:24.899: INFO: Pod "downward-api-24e80ac1-b2dd-41ad-9971-966ce95f0a49" satisfied condition "success or failure" May 24 21:10:24.901: INFO: Trying to get logs from node jerma-worker2 pod downward-api-24e80ac1-b2dd-41ad-9971-966ce95f0a49 container dapi-container: STEP: delete the pod May 24 21:10:24.960: INFO: Waiting for pod downward-api-24e80ac1-b2dd-41ad-9971-966ce95f0a49 to disappear May 24 21:10:24.969: INFO: Pod downward-api-24e80ac1-b2dd-41ad-9971-966ce95f0a49 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:10:24.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7985" for this suite. 
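The test above creates a pod whose env vars are filled in by the downward API's `resourceFieldRef`; because the container sets no `resources.limits`, the kubelet substitutes the node's allocatable capacity. A minimal sketch of such a manifest (built as a plain dict — names and image are illustrative, this is not the e2e framework's Go code):

```python
def downward_api_env_pod(name):
    """Build a pod dict exposing limits.cpu/limits.memory as env vars.

    With no resources.limits on the container, the values default to
    the node's allocatable capacity, which is what the test verifies.
    """
    env = [
        {"name": "CPU_LIMIT",
         "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
        {"name": "MEMORY_LIMIT",
         "valueFrom": {"resourceFieldRef": {"resource": "limits.memory"}}},
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",   # container name from the log
                "image": "busybox:1.29",    # illustrative image
                "command": ["sh", "-c", "env"],
                "env": env,
                # Deliberately no "resources" key: defaults come from
                # node allocatable rather than explicit limits.
            }],
        },
    }

pod = downward_api_env_pod("downward-api-demo")
```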
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":33,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:10:24.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-99eb4f6d-b3fc-4f19-b2b0-49ac63bf3d10 STEP: Creating a pod to test consume configMaps May 24 21:10:25.067: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb095058-86d2-4a1f-aaf0-058d1ac47a2d" in namespace "configmap-8279" to be "success or failure" May 24 21:10:25.088: INFO: Pod "pod-configmaps-eb095058-86d2-4a1f-aaf0-058d1ac47a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.220388ms May 24 21:10:27.092: INFO: Pod "pod-configmaps-eb095058-86d2-4a1f-aaf0-058d1ac47a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02519237s May 24 21:10:29.096: INFO: Pod "pod-configmaps-eb095058-86d2-4a1f-aaf0-058d1ac47a2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029613035s STEP: Saw pod success May 24 21:10:29.096: INFO: Pod "pod-configmaps-eb095058-86d2-4a1f-aaf0-058d1ac47a2d" satisfied condition "success or failure" May 24 21:10:29.099: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-eb095058-86d2-4a1f-aaf0-058d1ac47a2d container configmap-volume-test: STEP: delete the pod May 24 21:10:29.135: INFO: Waiting for pod pod-configmaps-eb095058-86d2-4a1f-aaf0-058d1ac47a2d to disappear May 24 21:10:29.143: INFO: Pod pod-configmaps-eb095058-86d2-4a1f-aaf0-058d1ac47a2d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:10:29.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8279" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":37,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:10:29.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-43f18ea7-29be-4884-9a48-246d9d45a1b8 STEP: Creating a 
pod to test consume secrets May 24 21:10:29.229: INFO: Waiting up to 5m0s for pod "pod-secrets-f1658a22-206f-4d7c-a0aa-7f4feec87803" in namespace "secrets-131" to be "success or failure" May 24 21:10:29.235: INFO: Pod "pod-secrets-f1658a22-206f-4d7c-a0aa-7f4feec87803": Phase="Pending", Reason="", readiness=false. Elapsed: 5.890747ms May 24 21:10:31.279: INFO: Pod "pod-secrets-f1658a22-206f-4d7c-a0aa-7f4feec87803": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049520692s May 24 21:10:33.283: INFO: Pod "pod-secrets-f1658a22-206f-4d7c-a0aa-7f4feec87803": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054006441s STEP: Saw pod success May 24 21:10:33.283: INFO: Pod "pod-secrets-f1658a22-206f-4d7c-a0aa-7f4feec87803" satisfied condition "success or failure" May 24 21:10:33.287: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f1658a22-206f-4d7c-a0aa-7f4feec87803 container secret-volume-test: STEP: delete the pod May 24 21:10:33.324: INFO: Waiting for pod pod-secrets-f1658a22-206f-4d7c-a0aa-7f4feec87803 to disappear May 24 21:10:33.336: INFO: Pod pod-secrets-f1658a22-206f-4d7c-a0aa-7f4feec87803 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:10:33.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-131" for this suite. 
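The Secrets test above consumes a Secret as a volume with key-to-path mappings and a per-item file mode. A sketch of the kind of pod spec involved (the secret name, key, and paths are assumptions, not taken from the framework's source):

```python
def secret_volume_pod(pod_name, secret_name):
    """Pod consuming a Secret volume with an items mapping and item mode."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "secret-volume",
                "secret": {
                    "secretName": secret_name,
                    # "items" remaps the key to a custom path; "mode" is
                    # the per-item file mode the test asserts on.
                    "items": [{"key": "data-1",
                               "path": "new-path-data-1",
                               "mode": 0o400}],
                },
            }],
            "containers": [{
                "name": "secret-volume-test",  # container name from the log
                "image": "busybox:1.29",
                "command": ["sh", "-c",
                            "ls -l /etc/secret-volume/new-path-data-1"],
                "volumeMounts": [{"name": "secret-volume",
                                  "mountPath": "/etc/secret-volume",
                                  "readOnly": True}],
            }],
        },
    }

pod = secret_volume_pod("pod-secrets-demo", "secret-test-map-demo")
```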
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":52,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:10:33.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:10:33.446: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 24 21:10:33.457: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:33.536: INFO: Number of nodes with available pods: 0 May 24 21:10:33.536: INFO: Node jerma-worker is running more than one daemon pod May 24 21:10:34.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:34.545: INFO: Number of nodes with available pods: 0 May 24 21:10:34.545: INFO: Node jerma-worker is running more than one daemon pod May 24 21:10:35.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:35.544: INFO: Number of nodes with available pods: 0 May 24 21:10:35.544: INFO: Node jerma-worker is running more than one daemon pod May 24 21:10:36.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:36.544: INFO: Number of nodes with available pods: 0 May 24 21:10:36.544: INFO: Node jerma-worker is running more than one daemon pod May 24 21:10:37.561: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:37.564: INFO: Number of nodes with available pods: 0 May 24 21:10:37.564: INFO: Node jerma-worker is running more than one daemon pod May 24 21:10:38.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:38.543: INFO: Number of nodes with available pods: 2 May 24 21:10:38.543: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 24 21:10:38.650: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:38.650: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:38.666: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:39.671: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:39.671: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:39.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:40.671: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:40.671: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:40.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:41.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:41.670: INFO: Wrong image for pod: daemon-set-z2kn9. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:41.670: INFO: Pod daemon-set-z2kn9 is not available May 24 21:10:41.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:42.671: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:42.671: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:42.671: INFO: Pod daemon-set-z2kn9 is not available May 24 21:10:42.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:43.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:43.670: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:43.670: INFO: Pod daemon-set-z2kn9 is not available May 24 21:10:43.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:44.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:44.671: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 24 21:10:44.671: INFO: Pod daemon-set-z2kn9 is not available May 24 21:10:44.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:45.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:45.670: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:45.670: INFO: Pod daemon-set-z2kn9 is not available May 24 21:10:45.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:46.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:46.670: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:46.670: INFO: Pod daemon-set-z2kn9 is not available May 24 21:10:46.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:47.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:47.670: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 24 21:10:47.670: INFO: Pod daemon-set-z2kn9 is not available May 24 21:10:47.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:48.671: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:48.671: INFO: Wrong image for pod: daemon-set-z2kn9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:48.671: INFO: Pod daemon-set-z2kn9 is not available May 24 21:10:48.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:49.677: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:49.677: INFO: Pod daemon-set-lwbq4 is not available May 24 21:10:49.706: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:50.671: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:50.671: INFO: Pod daemon-set-lwbq4 is not available May 24 21:10:50.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:51.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 24 21:10:51.670: INFO: Pod daemon-set-lwbq4 is not available May 24 21:10:51.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:52.681: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:52.685: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:53.671: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:53.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:54.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:54.670: INFO: Pod daemon-set-jst6q is not available May 24 21:10:54.673: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:55.671: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:55.671: INFO: Pod daemon-set-jst6q is not available May 24 21:10:55.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:56.672: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 24 21:10:56.672: INFO: Pod daemon-set-jst6q is not available May 24 21:10:56.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:57.671: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:57.671: INFO: Pod daemon-set-jst6q is not available May 24 21:10:57.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:58.670: INFO: Wrong image for pod: daemon-set-jst6q. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 24 21:10:58.670: INFO: Pod daemon-set-jst6q is not available May 24 21:10:58.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:59.671: INFO: Pod daemon-set-wjt8j is not available May 24 21:10:59.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 24 21:10:59.680: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:10:59.683: INFO: Number of nodes with available pods: 1 May 24 21:10:59.683: INFO: Node jerma-worker is running more than one daemon pod May 24 21:11:00.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:11:00.828: INFO: Number of nodes with available pods: 1 May 24 21:11:00.829: INFO: Node jerma-worker is running more than one daemon pod May 24 21:11:01.706: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:11:01.709: INFO: Number of nodes with available pods: 1 May 24 21:11:01.709: INFO: Node jerma-worker is running more than one daemon pod May 24 21:11:02.689: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:11:02.692: INFO: Number of nodes with available pods: 1 May 24 21:11:02.692: INFO: Node jerma-worker is running more than one daemon pod May 24 21:11:03.688: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:11:03.692: INFO: Number of nodes with available pods: 2 May 24 21:11:03.693: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9732, will wait for the garbage collector to 
delete the pods May 24 21:11:03.774: INFO: Deleting DaemonSet.extensions daemon-set took: 8.985249ms May 24 21:11:04.074: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.280929ms May 24 21:11:09.578: INFO: Number of nodes with available pods: 0 May 24 21:11:09.578: INFO: Number of running nodes: 0, number of available pods: 0 May 24 21:11:09.580: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9732/daemonsets","resourceVersion":"18845412"},"items":null} May 24 21:11:09.582: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9732/pods","resourceVersion":"18845412"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:09.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9732" for this suite. 
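The DaemonSet test above flips the pod template's image from `docker.io/library/httpd:2.4.38-alpine` to `gcr.io/kubernetes-e2e-test-images/agnhost:2.8` and waits for the rollout, which is why the log shows pods going unavailable one at a time. A minimal sketch of such a DaemonSet and the image update that triggers it (labels and container name are illustrative):

```python
def daemon_set(name, image):
    """Build a DaemonSet dict with a RollingUpdate update strategy."""
    labels = {"daemonset-name": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "DaemonSet",
        "metadata": {"name": name},
        "spec": {
            "selector": {"matchLabels": labels},
            # RollingUpdate replaces daemon pods incrementally after a
            # template change, bounded by maxUnavailable.
            "updateStrategy": {"type": "RollingUpdate",
                               "rollingUpdate": {"maxUnavailable": 1}},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "app", "image": image}]},
            },
        },
    }

ds = daemon_set("daemon-set", "docker.io/library/httpd:2.4.38-alpine")
# Updating the pod template image is what kicks off the rolling update
# observed in the log:
ds["spec"]["template"]["spec"]["containers"][0]["image"] = \
    "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
```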
• [SLOW TEST:36.229 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":4,"skipped":61,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:09.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:11:09.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2971f249-53b6-4c59-a625-ceb291f6c5d3" in namespace "downward-api-7859" to be "success or failure" May 24 21:11:09.698: INFO: Pod "downwardapi-volume-2971f249-53b6-4c59-a625-ceb291f6c5d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.962073ms May 24 21:11:11.702: INFO: Pod "downwardapi-volume-2971f249-53b6-4c59-a625-ceb291f6c5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006973663s May 24 21:11:13.706: INFO: Pod "downwardapi-volume-2971f249-53b6-4c59-a625-ceb291f6c5d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010585241s STEP: Saw pod success May 24 21:11:13.706: INFO: Pod "downwardapi-volume-2971f249-53b6-4c59-a625-ceb291f6c5d3" satisfied condition "success or failure" May 24 21:11:13.709: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-2971f249-53b6-4c59-a625-ceb291f6c5d3 container client-container: STEP: delete the pod May 24 21:11:13.726: INFO: Waiting for pod downwardapi-volume-2971f249-53b6-4c59-a625-ceb291f6c5d3 to disappear May 24 21:11:13.755: INFO: Pod downwardapi-volume-2971f249-53b6-4c59-a625-ceb291f6c5d3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:13.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7859" for this suite. 
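The Downward API volume test above projects pod metadata into a file and sets a per-item mode on it. A sketch of the relevant volume definition (file path, mode value, and image are assumptions for illustration):

```python
def downward_api_volume_pod(pod_name, mode=0o400):
    """Pod with a downwardAPI volume whose item carries an explicit mode."""
    item = {
        "path": "podname",
        "fieldRef": {"fieldPath": "metadata.name"},
        "mode": mode,  # per-item file mode the test checks on disk
    }
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{"name": "podinfo",
                         "downwardAPI": {"items": [item]}}],
            "containers": [{
                "name": "client-container",  # container name from the log
                "image": "busybox:1.29",
                "command": ["sh", "-c", "ls -l /etc/podinfo/podname"],
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
        },
    }

pod = downward_api_volume_pod("downwardapi-volume-demo")
```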
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:13.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e369bb5b-d598-419b-95e7-a9a90438db87 STEP: Creating a pod to test consume configMaps May 24 21:11:14.094: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f04081d-f2b0-4344-acde-86e494e50654" in namespace "configmap-1032" to be "success or failure" May 24 21:11:14.127: INFO: Pod "pod-configmaps-9f04081d-f2b0-4344-acde-86e494e50654": Phase="Pending", Reason="", readiness=false. Elapsed: 33.117566ms May 24 21:11:16.132: INFO: Pod "pod-configmaps-9f04081d-f2b0-4344-acde-86e494e50654": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037675654s May 24 21:11:18.136: INFO: Pod "pod-configmaps-9f04081d-f2b0-4344-acde-86e494e50654": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041481835s STEP: Saw pod success May 24 21:11:18.136: INFO: Pod "pod-configmaps-9f04081d-f2b0-4344-acde-86e494e50654" satisfied condition "success or failure" May 24 21:11:18.139: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-9f04081d-f2b0-4344-acde-86e494e50654 container configmap-volume-test: STEP: delete the pod May 24 21:11:18.324: INFO: Waiting for pod pod-configmaps-9f04081d-f2b0-4344-acde-86e494e50654 to disappear May 24 21:11:18.415: INFO: Pod pod-configmaps-9f04081d-f2b0-4344-acde-86e494e50654 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:18.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1032" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":100,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:18.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 24 21:11:18.629: INFO: Waiting up to 5m0s for pod "pod-5d82e598-4030-46bb-be65-22dab3a3bc81" in namespace 
"emptydir-7703" to be "success or failure" May 24 21:11:18.673: INFO: Pod "pod-5d82e598-4030-46bb-be65-22dab3a3bc81": Phase="Pending", Reason="", readiness=false. Elapsed: 43.787585ms May 24 21:11:20.765: INFO: Pod "pod-5d82e598-4030-46bb-be65-22dab3a3bc81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135956878s May 24 21:11:22.770: INFO: Pod "pod-5d82e598-4030-46bb-be65-22dab3a3bc81": Phase="Running", Reason="", readiness=true. Elapsed: 4.140657338s May 24 21:11:24.775: INFO: Pod "pod-5d82e598-4030-46bb-be65-22dab3a3bc81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145352762s STEP: Saw pod success May 24 21:11:24.775: INFO: Pod "pod-5d82e598-4030-46bb-be65-22dab3a3bc81" satisfied condition "success or failure" May 24 21:11:24.778: INFO: Trying to get logs from node jerma-worker pod pod-5d82e598-4030-46bb-be65-22dab3a3bc81 container test-container: STEP: delete the pod May 24 21:11:24.801: INFO: Waiting for pod pod-5d82e598-4030-46bb-be65-22dab3a3bc81 to disappear May 24 21:11:24.804: INFO: Pod pod-5d82e598-4030-46bb-be65-22dab3a3bc81 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:24.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7703" for this suite. 
• [SLOW TEST:6.321 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":114,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:24.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:29.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5156" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":8,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:29.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0524 21:11:40.689595 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 24 21:11:40.689: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:40.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9578" for this suite. 
• [SLOW TEST:11.669 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":9,"skipped":156,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:40.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-834/configmap-test-6df229a4-3124-4be6-a60f-99e9614b46a7 STEP: Creating a pod to test consume configMaps May 24 21:11:41.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8df30ec-f726-45a0-952b-0094d7b5e28d" in namespace "configmap-834" to be "success or failure" May 24 21:11:41.288: INFO: Pod "pod-configmaps-b8df30ec-f726-45a0-952b-0094d7b5e28d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.788906ms May 24 21:11:43.292: INFO: Pod "pod-configmaps-b8df30ec-f726-45a0-952b-0094d7b5e28d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057962464s May 24 21:11:45.297: INFO: Pod "pod-configmaps-b8df30ec-f726-45a0-952b-0094d7b5e28d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063057667s STEP: Saw pod success May 24 21:11:45.297: INFO: Pod "pod-configmaps-b8df30ec-f726-45a0-952b-0094d7b5e28d" satisfied condition "success or failure" May 24 21:11:45.300: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b8df30ec-f726-45a0-952b-0094d7b5e28d container env-test: STEP: delete the pod May 24 21:11:45.541: INFO: Waiting for pod pod-configmaps-b8df30ec-f726-45a0-952b-0094d7b5e28d to disappear May 24 21:11:45.559: INFO: Pod pod-configmaps-b8df30ec-f726-45a0-952b-0094d7b5e28d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:45.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-834" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":162,"failed":0} ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:45.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 24 21:11:45.716: INFO: Waiting up to 5m0s for pod "var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95" in namespace "var-expansion-7408" to be "success or failure" May 24 21:11:45.727: INFO: Pod "var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36197ms May 24 21:11:47.842: INFO: Pod "var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125214951s May 24 21:11:49.847: INFO: Pod "var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130099967s May 24 21:11:51.851: INFO: Pod "var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.13425577s STEP: Saw pod success May 24 21:11:51.851: INFO: Pod "var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95" satisfied condition "success or failure" May 24 21:11:51.854: INFO: Trying to get logs from node jerma-worker pod var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95 container dapi-container: STEP: delete the pod May 24 21:11:51.871: INFO: Waiting for pod var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95 to disappear May 24 21:11:51.926: INFO: Pod var-expansion-bf6977a2-40f6-49fc-8922-d510434bdc95 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:51.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7408" for this suite. • [SLOW TEST:6.366 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":162,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:51.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-c776dee8-1c80-4d42-a081-e39530521549 STEP: Creating a pod to test consume secrets May 24 21:11:52.068: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73" in namespace "projected-8897" to be "success or failure" May 24 21:11:52.087: INFO: Pod "pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73": Phase="Pending", Reason="", readiness=false. Elapsed: 18.80522ms May 24 21:11:54.090: INFO: Pod "pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022060336s May 24 21:11:56.095: INFO: Pod "pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73": Phase="Running", Reason="", readiness=true. Elapsed: 4.027107995s May 24 21:11:58.100: INFO: Pod "pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031825509s STEP: Saw pod success May 24 21:11:58.100: INFO: Pod "pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73" satisfied condition "success or failure" May 24 21:11:58.104: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73 container secret-volume-test: STEP: delete the pod May 24 21:11:58.168: INFO: Waiting for pod pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73 to disappear May 24 21:11:58.176: INFO: Pod pod-projected-secrets-df035d18-89a7-44d2-bc62-6001afea3c73 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:11:58.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8897" for this suite. 
• [SLOW TEST:6.251 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:11:58.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-970/secret-test-d9db3a62-e0f3-4234-92c0-efa1c12f6cf5 STEP: Creating a pod to test consume secrets May 24 21:11:58.335: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea5b3f0c-db32-4a75-a921-5cb651b8368a" in namespace "secrets-970" to be "success or failure" May 24 21:11:58.338: INFO: Pod "pod-configmaps-ea5b3f0c-db32-4a75-a921-5cb651b8368a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.500958ms May 24 21:12:00.342: INFO: Pod "pod-configmaps-ea5b3f0c-db32-4a75-a921-5cb651b8368a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007501753s May 24 21:12:02.347: INFO: Pod "pod-configmaps-ea5b3f0c-db32-4a75-a921-5cb651b8368a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012383587s STEP: Saw pod success May 24 21:12:02.347: INFO: Pod "pod-configmaps-ea5b3f0c-db32-4a75-a921-5cb651b8368a" satisfied condition "success or failure" May 24 21:12:02.351: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ea5b3f0c-db32-4a75-a921-5cb651b8368a container env-test: STEP: delete the pod May 24 21:12:02.437: INFO: Waiting for pod pod-configmaps-ea5b3f0c-db32-4a75-a921-5cb651b8368a to disappear May 24 21:12:02.467: INFO: Pod pod-configmaps-ea5b3f0c-db32-4a75-a921-5cb651b8368a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:12:02.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-970" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":216,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:12:02.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 24 21:12:03.296: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 24 21:12:05.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951523, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951523, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951523, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951523, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:12:08.344: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:12:08.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:12:09.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6854" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.149 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":14,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:12:09.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:12:22.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6565" for this suite. • [SLOW TEST:13.189 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":15,"skipped":240,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:12:22.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-8e78905d-ffd4-4c21-8ca4-bea939866ed6 STEP: Creating a pod to test consume secrets May 24 21:12:22.934: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f" in namespace "projected-6158" to be "success or failure" May 24 21:12:22.948: INFO: Pod "pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.860084ms May 24 21:12:24.965: INFO: Pod "pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031320034s May 24 21:12:26.969: INFO: Pod "pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f": Phase="Running", Reason="", readiness=true. Elapsed: 4.03520369s May 24 21:12:28.973: INFO: Pod "pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.039592492s STEP: Saw pod success May 24 21:12:28.973: INFO: Pod "pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f" satisfied condition "success or failure" May 24 21:12:28.976: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f container projected-secret-volume-test: STEP: delete the pod May 24 21:12:28.994: INFO: Waiting for pod pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f to disappear May 24 21:12:28.998: INFO: Pod pod-projected-secrets-aa9117d9-3456-4e5b-af94-c6556a020f6f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:12:28.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6158" for this suite. • [SLOW TEST:6.153 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":247,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:12:29.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 24 21:12:33.246: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1360 PodName:pod-sharedvolume-aef07faf-1a9c-423a-a090-d940cb93cc2b ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:12:33.246: INFO: >>> kubeConfig: /root/.kube/config I0524 21:12:33.275071 6 log.go:172] (0xc00314c210) (0xc002721040) Create stream I0524 21:12:33.275104 6 log.go:172] (0xc00314c210) (0xc002721040) Stream added, broadcasting: 1 I0524 21:12:33.286755 6 log.go:172] (0xc00314c210) Reply frame received for 1 I0524 21:12:33.286791 6 log.go:172] (0xc00314c210) (0xc0027210e0) Create stream I0524 21:12:33.286804 6 log.go:172] (0xc00314c210) (0xc0027210e0) Stream added, broadcasting: 3 I0524 21:12:33.287802 6 log.go:172] (0xc00314c210) Reply frame received for 3 I0524 21:12:33.287841 6 log.go:172] (0xc00314c210) (0xc002200c80) Create stream I0524 21:12:33.287857 6 log.go:172] (0xc00314c210) (0xc002200c80) Stream added, broadcasting: 5 I0524 21:12:33.288650 6 log.go:172] (0xc00314c210) Reply frame received for 5 I0524 21:12:33.341450 6 log.go:172] (0xc00314c210) Data frame received for 5 I0524 21:12:33.341513 6 log.go:172] (0xc002200c80) (5) Data frame handling I0524 21:12:33.341555 6 log.go:172] (0xc00314c210) Data frame received for 3 I0524 21:12:33.341580 6 log.go:172] (0xc0027210e0) (3) Data frame handling I0524 21:12:33.341616 6 log.go:172] (0xc0027210e0) (3) Data frame sent I0524 21:12:33.341631 6 log.go:172] (0xc00314c210) Data frame received for 3 I0524 
21:12:33.341643 6 log.go:172] (0xc0027210e0) (3) Data frame handling I0524 21:12:33.343143 6 log.go:172] (0xc00314c210) Data frame received for 1 I0524 21:12:33.343167 6 log.go:172] (0xc002721040) (1) Data frame handling I0524 21:12:33.343177 6 log.go:172] (0xc002721040) (1) Data frame sent I0524 21:12:33.343186 6 log.go:172] (0xc00314c210) (0xc002721040) Stream removed, broadcasting: 1 I0524 21:12:33.343220 6 log.go:172] (0xc00314c210) Go away received I0524 21:12:33.343436 6 log.go:172] (0xc00314c210) (0xc002721040) Stream removed, broadcasting: 1 I0524 21:12:33.343447 6 log.go:172] (0xc00314c210) (0xc0027210e0) Stream removed, broadcasting: 3 I0524 21:12:33.343452 6 log.go:172] (0xc00314c210) (0xc002200c80) Stream removed, broadcasting: 5 May 24 21:12:33.343: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:12:33.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1360" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":17,"skipped":261,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:12:33.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-2052 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2052 STEP: Deleting pre-stop pod May 24 21:12:46.480: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:12:46.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2052" for this suite. • [SLOW TEST:13.202 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":18,"skipped":273,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:12:46.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:12:46.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 24 21:12:47.431: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T21:12:47Z generation:1 name:name1 
resourceVersion:18846292 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d2893bde-129f-4a9d-b39b-faea54f117cf] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 24 21:12:57.436: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T21:12:57Z generation:1 name:name2 resourceVersion:18846339 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2dd81e98-eea1-44b9-99b4-0a61104017fa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 24 21:13:07.443: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T21:12:47Z generation:2 name:name1 resourceVersion:18846369 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d2893bde-129f-4a9d-b39b-faea54f117cf] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 24 21:13:17.448: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T21:12:57Z generation:2 name:name2 resourceVersion:18846401 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2dd81e98-eea1-44b9-99b4-0a61104017fa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 24 21:13:27.456: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T21:12:47Z generation:2 name:name1 resourceVersion:18846433 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d2893bde-129f-4a9d-b39b-faea54f117cf] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 24 21:13:37.463: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-05-24T21:12:57Z generation:2 name:name2 resourceVersion:18846465 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2dd81e98-eea1-44b9-99b4-0a61104017fa] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:13:47.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-764" for this suite. • [SLOW TEST:61.429 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":19,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:13:47.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 24 21:13:48.114: INFO: Waiting up to 5m0s for pod "pod-ed9ecc77-281c-4e7f-b5da-91f59266193d" in namespace "emptydir-1806" to be "success or failure" May 24 21:13:48.137: INFO: Pod "pod-ed9ecc77-281c-4e7f-b5da-91f59266193d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.255512ms May 24 21:13:50.142: INFO: Pod "pod-ed9ecc77-281c-4e7f-b5da-91f59266193d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027462135s May 24 21:13:52.146: INFO: Pod "pod-ed9ecc77-281c-4e7f-b5da-91f59266193d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031687903s STEP: Saw pod success May 24 21:13:52.146: INFO: Pod "pod-ed9ecc77-281c-4e7f-b5da-91f59266193d" satisfied condition "success or failure" May 24 21:13:52.149: INFO: Trying to get logs from node jerma-worker2 pod pod-ed9ecc77-281c-4e7f-b5da-91f59266193d container test-container: STEP: delete the pod May 24 21:13:52.223: INFO: Waiting for pod pod-ed9ecc77-281c-4e7f-b5da-91f59266193d to disappear May 24 21:13:52.226: INFO: Pod pod-ed9ecc77-281c-4e7f-b5da-91f59266193d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:13:52.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1806" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":307,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:13:52.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 24 21:13:52.283: INFO: Waiting up to 5m0s for pod "pod-a278f2df-005c-4a63-a265-7c9bc75cd9df" in namespace "emptydir-5754" to be "success or failure" May 24 21:13:52.287: INFO: Pod "pod-a278f2df-005c-4a63-a265-7c9bc75cd9df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.361114ms May 24 21:13:54.292: INFO: Pod "pod-a278f2df-005c-4a63-a265-7c9bc75cd9df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008043358s May 24 21:13:56.295: INFO: Pod "pod-a278f2df-005c-4a63-a265-7c9bc75cd9df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011623129s STEP: Saw pod success May 24 21:13:56.295: INFO: Pod "pod-a278f2df-005c-4a63-a265-7c9bc75cd9df" satisfied condition "success or failure" May 24 21:13:56.298: INFO: Trying to get logs from node jerma-worker pod pod-a278f2df-005c-4a63-a265-7c9bc75cd9df container test-container: STEP: delete the pod May 24 21:13:56.325: INFO: Waiting for pod pod-a278f2df-005c-4a63-a265-7c9bc75cd9df to disappear May 24 21:13:56.329: INFO: Pod pod-a278f2df-005c-4a63-a265-7c9bc75cd9df no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:13:56.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5754" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":309,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:13:56.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-5671 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5671 to 
expose endpoints map[] May 24 21:13:56.563: INFO: Get endpoints failed (2.908784ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 24 21:13:57.567: INFO: successfully validated that service multi-endpoint-test in namespace services-5671 exposes endpoints map[] (1.006291388s elapsed) STEP: Creating pod pod1 in namespace services-5671 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5671 to expose endpoints map[pod1:[100]] May 24 21:14:00.645: INFO: successfully validated that service multi-endpoint-test in namespace services-5671 exposes endpoints map[pod1:[100]] (3.07176663s elapsed) STEP: Creating pod pod2 in namespace services-5671 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5671 to expose endpoints map[pod1:[100] pod2:[101]] May 24 21:14:03.867: INFO: successfully validated that service multi-endpoint-test in namespace services-5671 exposes endpoints map[pod1:[100] pod2:[101]] (3.213134458s elapsed) STEP: Deleting pod pod1 in namespace services-5671 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5671 to expose endpoints map[pod2:[101]] May 24 21:14:04.913: INFO: successfully validated that service multi-endpoint-test in namespace services-5671 exposes endpoints map[pod2:[101]] (1.042284831s elapsed) STEP: Deleting pod pod2 in namespace services-5671 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5671 to expose endpoints map[] May 24 21:14:05.927: INFO: successfully validated that service multi-endpoint-test in namespace services-5671 exposes endpoints map[] (1.009151039s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:14:06.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5671" for this suite. 
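The multiport Services test above repeatedly waits for the service's endpoints to match an expected pod-to-ports map (e.g. `map[pod1:[100] pod2:[101]]`). A minimal sketch of that comparison, ignoring port order (illustrative; the framework's real helper also resolves endpoint addresses back to pod names):

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// endpointsEqual reports whether the observed pod→ports mapping matches
// the expected one, treating port lists as unordered sets.
func endpointsEqual(expected, observed map[string][]int) bool {
	if len(expected) != len(observed) {
		return false
	}
	for pod, want := range expected {
		got, ok := observed[pod]
		if !ok || len(want) != len(got) {
			return false
		}
		sort.Ints(want)
		sort.Ints(got)
		if !reflect.DeepEqual(want, got) {
			return false
		}
	}
	return true
}

func main() {
	expected := map[string][]int{"pod1": {100}, "pod2": {101}}
	observed := map[string][]int{"pod2": {101}, "pod1": {100}}
	fmt.Println(endpointsEqual(expected, observed)) // true
}
```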
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.805 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":22,"skipped":319,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:14:06.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:14:13.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5372" for this suite. 
• [SLOW TEST:7.092 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":23,"skipped":325,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:14:13.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:14:13.304: INFO: Creating deployment "webserver-deployment" May 24 21:14:13.308: INFO: Waiting for observed generation 1 May 24 21:14:15.360: INFO: Waiting for all required pods to come up May 24 21:14:15.422: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 24 21:14:25.433: INFO: Waiting for deployment "webserver-deployment" to complete May 24 21:14:25.440: INFO: Updating deployment "webserver-deployment" with a non-existent image May 24 21:14:25.446: INFO: Updating deployment webserver-deployment 
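The proportional-scaling test below scales "webserver-deployment" from 10 to 30 replicas mid-rollout (old ReplicaSet at 8, new at 5, maxSurge 3) and verifies the old ReplicaSet lands on `.spec.replicas = 20` and the new on 13. A simplified sketch of that arithmetic (illustrative; the actual controller logic in `kubernetes/pkg/controller/deployment` handles rounding and saturation more carefully):

```go
package main

import "fmt"

// proportionalScale distributes the surge headroom across the old and new
// ReplicaSets in proportion to their current sizes. Simplified sketch:
// the old set gets the floor of its proportional share, the new set the rest.
func proportionalScale(oldReplicas, newReplicas, desired, maxSurge int) (int, int) {
	allowed := desired + maxSurge        // 30 + 3 = 33 total replicas permitted
	current := oldReplicas + newReplicas // 8 + 5 = 13 currently requested
	delta := allowed - current           // 20 additional replicas to distribute
	oldShare := delta * oldReplicas / current
	return oldReplicas + oldShare, newReplicas + (delta - oldShare)
}

func main() {
	oldRS, newRS := proportionalScale(8, 5, 30, 3)
	fmt.Println(oldRS, newRS) // 20 13, matching the values verified in the log
}
```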
May 24 21:14:25.446: INFO: Waiting for observed generation 2 May 24 21:14:27.460: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 24 21:14:27.463: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 24 21:14:27.465: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 24 21:14:27.471: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 24 21:14:27.471: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 24 21:14:27.472: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 24 21:14:27.475: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 24 21:14:27.475: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 24 21:14:27.479: INFO: Updating deployment webserver-deployment May 24 21:14:27.479: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 24 21:14:27.492: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 24 21:14:27.522: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 24 21:14:28.273: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1024 /apis/apps/v1/namespaces/deployment-1024/deployments/webserver-deployment f43f9df0-07f2-4106-83ce-f8e4ac40649e 18846916 3 2020-05-24 21:14:13 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002589c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-24 21:14:26 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-24 21:14:27 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 24 21:14:28.428: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-1024 /apis/apps/v1/namespaces/deployment-1024/replicasets/webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 18846959 3 2020-05-24 21:14:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f43f9df0-07f2-4106-83ce-f8e4ac40649e 0xc00064efc7 0xc00064efc8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00064f0b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 21:14:28.428: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 24 21:14:28.428: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-1024 /apis/apps/v1/namespaces/deployment-1024/replicasets/webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 18846958 3 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f43f9df0-07f2-4106-83ce-f8e4ac40649e 0xc00064ee17 0xc00064ee18}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00064eef8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 24 21:14:28.546: INFO: Pod "webserver-deployment-595b5b9587-2hpjc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2hpjc webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-2hpjc 42f7b1dd-a880-4070-9c83-b21ee4e0cb87 18846794 0 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc00064f9b7 0xc00064f9b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.63,StartTime:2020-05-24 21:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:14:19 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://39953ca7929460e97ebd4a48aba809cfb1a03abdc9e63b89a57ce98e6f42a80f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.547: INFO: Pod "webserver-deployment-595b5b9587-4ln2s" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4ln2s webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-4ln2s d7b1ae0c-336c-44e6-bf2d-023dc8800fbf 18846831 0 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc00064fc57 0xc00064fc58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.123,StartTime:2020-05-24 21:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:14:23 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f115ba7b54fdda4c27b0f25b7d252e6ed15590f98a8740506b39cac89fd7776b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.547: INFO: Pod "webserver-deployment-595b5b9587-4w7wb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4w7wb webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-4w7wb 441ceb14-553d-4058-ac5a-cc035900f9ff 18846963 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000870117 0xc000870118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.547: INFO: Pod "webserver-deployment-595b5b9587-5zg86" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5zg86 webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-5zg86 758895dd-bd17-4ba1-8d8f-8b96f8ca49e1 18846809 0 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000870a27 0xc000870a28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.64,StartTime:2020-05-24 21:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:14:21 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2ddae088a1986e986258589c6955214fbe78bd3f72ba1363a70c338a13424425,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.547: INFO: Pod "webserver-deployment-595b5b9587-85xz7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-85xz7 webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-85xz7 9745e5a1-8dc4-46e4-b55a-4f6d30766b1b 18846957 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000870df7 0xc000870df8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-24 21:14:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.547: INFO: Pod "webserver-deployment-595b5b9587-9t9rs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9t9rs webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-9t9rs e7095287-5115-4745-8208-97dfba3878a6 18846940 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000871097 0xc000871098}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.547: INFO: Pod "webserver-deployment-595b5b9587-c6xjc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c6xjc webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-c6xjc c246fb94-47db-4c23-b265-5fe31e1716a4 18846835 0 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000871357 0xc000871358}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.122,StartTime:2020-05-24 21:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:14:23 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://87dd52235d3ea7adac231916ee2582edc75389cae5c751cf35f253ed28855273,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.122,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.547: INFO: Pod "webserver-deployment-595b5b9587-dr46q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dr46q webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-dr46q ad9ebb2b-2963-49fb-a37d-f6c2da03638e 18846956 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000871707 0xc000871708}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.548: INFO: Pod "webserver-deployment-595b5b9587-f79br" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f79br webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-f79br aec7cd21-0a0a-4fa1-80a7-463e22db5714 18846986 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000871967 0xc000871968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-24 21:14:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.548: INFO: Pod "webserver-deployment-595b5b9587-fpfnd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fpfnd webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-fpfnd f555a523-d01f-4104-8adc-beaeb773e7fd 18846955 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000871cf7 0xc000871cf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.548: INFO: Pod "webserver-deployment-595b5b9587-jqpz9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jqpz9 webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-jqpz9 6303cc25-52e1-41d0-9d2e-42636a5b7b8c 18846829 0 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000871fe7 0xc000871fe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.121,StartTime:2020-05-24 21:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:14:23 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://50eeff97c8ad8d09dd4329444f9aeb4d6442a04e4c6a74d6cb25b1ca3910915c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.548: INFO: Pod "webserver-deployment-595b5b9587-mlq6z" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mlq6z webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-mlq6z 8f358aa1-a9b1-403f-9b3d-804c228e16d9 18846785 0 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000813087 0xc000813088}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.62,StartTime:2020-05-24 21:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:14:18 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8349f61091dfe34efcda01f47f38f2c267b7de7ca31dc1d53a203a339daf5753,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.548: INFO: Pod "webserver-deployment-595b5b9587-nnk9s" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nnk9s webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-nnk9s 9e4ac1c4-aa2b-4b0b-8260-f752ebc4af02 18846764 0 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000813517 0xc000813518}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.61,StartTime:2020-05-24 21:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:14:16 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a6f81314132a83ce08cbfaf2b4ac6854b3f8a33de190c0a50ac706062e87a8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.548: INFO: Pod "webserver-deployment-595b5b9587-p87wq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-p87wq webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-p87wq 95dea46d-d311-4223-8e9a-f8f7a4e6e01f 18846962 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000813a17 0xc000813a18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.548: INFO: Pod "webserver-deployment-595b5b9587-p8hlg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-p8hlg webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-p8hlg 98d53d68-39b1-4de6-8623-5ebebf675c7f 18846961 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000813c67 0xc000813c68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.549: INFO: Pod "webserver-deployment-595b5b9587-rhvcl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rhvcl webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-rhvcl 4a70606f-95fd-4a5a-8981-ad0ac58a0c0b 18846934 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000813d87 0xc000813d88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.549: INFO: Pod "webserver-deployment-595b5b9587-rx8jf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rx8jf webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-rx8jf 0a7e6534-a6f3-4570-8dae-fa2aa3053447 18846926 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000813ee7 0xc000813ee8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.549: INFO: Pod "webserver-deployment-595b5b9587-vcpjp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vcpjp webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-vcpjp 0943af5a-0438-473b-9ce0-798e9d4df11f 18846936 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000e9e1e7 0xc000e9e1e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.549: INFO: Pod "webserver-deployment-595b5b9587-wzfnz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wzfnz webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-wzfnz 5b9263ac-dce3-4151-ba60-f554b2130cb6 18846806 0 2020-05-24 21:14:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000e9e427 0xc000e9e428}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.65,StartTime:2020-05-24 21:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:14:21 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c7caabed39f7e0590532fd354a4788c3770754d4e899fbc8e559bcf6c32cd185,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.549: INFO: Pod "webserver-deployment-595b5b9587-xbntl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xbntl webserver-deployment-595b5b9587- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-595b5b9587-xbntl fb8102fe-a640-4728-91ed-835931915844 18846937 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c3359278-f72c-49ef-bebf-3aa32d0881d0 0xc000e9e6d7 0xc000e9e6d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.549: INFO: Pod "webserver-deployment-c7997dcc8-64n6d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-64n6d webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-64n6d 4b424804-d135-470b-87d6-4435d7abfb99 18846897 0 2020-05-24 21:14:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9e9b7 0xc000e9e9b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-24 21:14:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.550: INFO: Pod "webserver-deployment-c7997dcc8-6tcqt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6tcqt webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-6tcqt e5e56630-a974-4ad0-ac61-cc824a1ac8d7 18846984 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9ec17 0xc000e9ec18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-24 21:14:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.550: INFO: Pod "webserver-deployment-c7997dcc8-6xdhk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6xdhk webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-6xdhk c32e96dd-9911-424e-ab72-526772e89efb 18846869 0 2020-05-24 21:14:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9efd7 0xc000e9efd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-24 21:14:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.550: INFO: Pod "webserver-deployment-c7997dcc8-6zpjt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6zpjt webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-6zpjt 65e5d3b5-737f-4913-9ab8-77c023b0d564 18846946 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9f187 0xc000e9f188}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.550: INFO: Pod "webserver-deployment-c7997dcc8-fqwz6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fqwz6 webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-fqwz6 78272d59-fddd-48ab-a1a5-4b3f33986975 18846968 0 2020-05-24 21:14:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9f467 0xc000e9f468}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.550: INFO: Pod "webserver-deployment-c7997dcc8-ln44s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ln44s webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-ln44s b97bdc19-b3a1-4adc-9612-ca4836bf1944 18846967 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9f5a7 0xc000e9f5a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.550: INFO: Pod "webserver-deployment-c7997dcc8-m5bbn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m5bbn webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-m5bbn 430622cd-bb2e-4830-b114-2f81b86e28c6 18846885 0 2020-05-24 21:14:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9f6d7 0xc000e9f6d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-24 21:14:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.550: INFO: Pod "webserver-deployment-c7997dcc8-n4867" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n4867 webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-n4867 d7e144e0-f8cd-459b-8c76-8fe7343f8bd9 18846944 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9f857 0xc000e9f858}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.551: INFO: Pod "webserver-deployment-c7997dcc8-nrdmx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nrdmx webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-nrdmx 6da0af27-beba-481b-95c2-fe36595613ab 18846966 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9fb97 0xc000e9fb98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.551: INFO: Pod "webserver-deployment-c7997dcc8-rg772" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rg772 webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-rg772 7c95cf4e-5118-422c-8bc8-e4cf01338d54 18846965 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000e9ff57 0xc000e9ff58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.551: INFO: Pod "webserver-deployment-c7997dcc8-t9wsb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t9wsb webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-t9wsb 22f7cbbd-3fc6-45b6-812b-c16b31f68e99 18846901 0 2020-05-24 21:14:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc0006001c7 0xc0006001c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-24 21:14:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.551: INFO: Pod "webserver-deployment-c7997dcc8-wcb5q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wcb5q webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-wcb5q 97d65d5f-05cd-4970-a972-68f9795adc8f 18846964 0 2020-05-24 21:14:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc0006007e7 0xc0006007e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 21:14:28.551: INFO: Pod "webserver-deployment-c7997dcc8-wfr5p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wfr5p webserver-deployment-c7997dcc8- deployment-1024 /api/v1/namespaces/deployment-1024/pods/webserver-deployment-c7997dcc8-wfr5p a7b20ec8-0b03-41eb-8a22-f3aaf5144538 18846870 0 2020-05-24 21:14:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f16d1703-74ec-4352-a01f-9a20e251de19 0xc000600957 0xc000600958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tqd6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tqd6j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tqd6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:14:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-24 21:14:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:14:28.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1024" for this suite. • [SLOW TEST:15.476 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":24,"skipped":329,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:14:28.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:14:32.883: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:14:35.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951673, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:14:37.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951673, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:14:39.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951673, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:14:42.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951673, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:14:44.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951673, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:14:46.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951673, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951672, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:14:49.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:14:52.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1132" for this suite. STEP: Destroying namespace "webhook-1132-markers" for this suite. 
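The "validating webhooks" this test lists and deletes as a collection are ValidatingWebhookConfiguration objects. A minimal sketch of one, assuming a webhook that rejects non-compliant ConfigMaps (the webhook name and service path here are hypothetical; only the namespace and service name come from this run):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook-config          # hypothetical name
webhooks:
- name: deny-unwanted-configmap-data.example.com   # hypothetical
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-1132            # namespace from this run
      name: e2e-test-webhook             # service paired above
      path: /configmaps                  # assumed handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                    # reject the request if the webhook is unreachable
```

Once the collection of these configurations is deleted, the second ConfigMap create in the log succeeds because no admission webhook intercepts it.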
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.470 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":25,"skipped":351,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:14:53.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 24 21:14:53.684: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:15:01.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6484" for this suite. • [SLOW TEST:7.864 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":26,"skipped":353,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:15:01.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0524 21:15:02.284486 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 21:15:02.284: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:15:02.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6432" for this suite. 
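The garbage collector deletes the ReplicaSet (and its pods) because the Deployment controller stamps each ReplicaSet with an ownerReference back to the Deployment; when the owner is deleted without orphaning, dependents are collected. A sketch of the relevant metadata (names and uid are hypothetical, not from this run):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-deployment-5f65f8c764    # hypothetical
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: example-deployment             # hypothetical owner
    uid: 00000000-0000-0000-0000-000000000000   # hypothetical
    controller: true
    blockOwnerDeletion: true             # foreground deletion waits on this dependent
```

The "expected 0 rs, got 1 rs" lines above are the test polling while the collector catches up, not a failure.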
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":27,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:15:02.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 24 21:15:02.393: INFO: Waiting up to 5m0s for pod "var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14" in namespace "var-expansion-915" to be "success or failure" May 24 21:15:02.398: INFO: Pod "var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715226ms May 24 21:15:04.402: INFO: Pod "var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008813265s May 24 21:15:06.406: INFO: Pod "var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012811992s May 24 21:15:08.412: INFO: Pod "var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018031334s STEP: Saw pod success May 24 21:15:08.412: INFO: Pod "var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14" satisfied condition "success or failure" May 24 21:15:08.415: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14 container dapi-container: STEP: delete the pod May 24 21:15:08.441: INFO: Waiting for pod var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14 to disappear May 24 21:15:08.462: INFO: Pod var-expansion-81734577-48ea-4176-bd07-46ac8b7bae14 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:15:08.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-915" for this suite. • [SLOW TEST:6.177 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":376,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:15:08.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 24 21:15:08.528: INFO: Waiting up to 5m0s for pod "pod-fe099c98-44c8-475f-93d2-76027fb6092f" in namespace "emptydir-4219" to be "success or failure" May 24 21:15:08.532: INFO: Pod "pod-fe099c98-44c8-475f-93d2-76027fb6092f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.663197ms May 24 21:15:10.536: INFO: Pod "pod-fe099c98-44c8-475f-93d2-76027fb6092f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008055272s May 24 21:15:12.547: INFO: Pod "pod-fe099c98-44c8-475f-93d2-76027fb6092f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01862291s STEP: Saw pod success May 24 21:15:12.547: INFO: Pod "pod-fe099c98-44c8-475f-93d2-76027fb6092f" satisfied condition "success or failure" May 24 21:15:12.550: INFO: Trying to get logs from node jerma-worker2 pod pod-fe099c98-44c8-475f-93d2-76027fb6092f container test-container: STEP: delete the pod May 24 21:15:12.580: INFO: Waiting for pod pod-fe099c98-44c8-475f-93d2-76027fb6092f to disappear May 24 21:15:12.590: INFO: Pod pod-fe099c98-44c8-475f-93d2-76027fb6092f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:15:12.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4219" for this suite. 
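The "emptydir volume type on tmpfs" pod in this test boils down to an emptyDir with `medium: Memory`. A minimal sketch, with hypothetical pod name and an assumed busybox image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28            # assumed image
    command: ["sh", "-c", "mount | grep /test-volume"]   # shows the tmpfs mount
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # backs the volume with tmpfs instead of node disk
```

The test then reads the container log to verify the volume's mount type and mode.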
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":386,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:15:12.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:15:12.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e9304e2-9b41-46ff-abbf-2449f4020c4c" in namespace "projected-8823" to be "success or failure" May 24 21:15:12.914: INFO: Pod "downwardapi-volume-6e9304e2-9b41-46ff-abbf-2449f4020c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.475303ms May 24 21:15:14.919: INFO: Pod "downwardapi-volume-6e9304e2-9b41-46ff-abbf-2449f4020c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012511029s May 24 21:15:16.923: INFO: Pod "downwardapi-volume-6e9304e2-9b41-46ff-abbf-2449f4020c4c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01664131s STEP: Saw pod success May 24 21:15:16.923: INFO: Pod "downwardapi-volume-6e9304e2-9b41-46ff-abbf-2449f4020c4c" satisfied condition "success or failure" May 24 21:15:16.927: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6e9304e2-9b41-46ff-abbf-2449f4020c4c container client-container: STEP: delete the pod May 24 21:15:16.964: INFO: Waiting for pod downwardapi-volume-6e9304e2-9b41-46ff-abbf-2449f4020c4c to disappear May 24 21:15:16.974: INFO: Pod downwardapi-volume-6e9304e2-9b41-46ff-abbf-2449f4020c4c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:15:16.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8823" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:15:16.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:15:18.040: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:15:20.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951718, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951718, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:15:22.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951718, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951718, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951718, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:15:25.157: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 24 21:15:31.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-6714 to-be-attached-pod -i -c=container1' May 24 21:15:33.819: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:15:33.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6714" for this suite. STEP: Destroying namespace "webhook-6714-markers" for this suite. 
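The `kubectl attach` denial above (rc: 1) comes from a webhook registered against the `pods/attach` subresource; `kubectl attach` issues a CONNECT request on it. A sketch of such a configuration, assuming webhook name and path (the namespace and service name are from this run):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod           # hypothetical name
webhooks:
- name: deny-attaching-pod.example.com   # hypothetical
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]          # kubectl attach is a CONNECT call
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-6714        # namespace from this run
      name: e2e-test-webhook
      path: /pods/attach             # assumed handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

Regular pod creation still succeeds ("create a pod" above) because the rule only matches the attach subresource.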
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.945 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":31,"skipped":472,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:15:33.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 24 21:15:33.994: INFO: Waiting up to 5m0s for pod "downward-api-265dd6b3-f921-4ffd-8d3d-dbc9065c6bb4" in namespace "downward-api-9101" to be "success or failure" May 24 21:15:34.011: INFO: Pod "downward-api-265dd6b3-f921-4ffd-8d3d-dbc9065c6bb4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.823809ms May 24 21:15:36.044: INFO: Pod "downward-api-265dd6b3-f921-4ffd-8d3d-dbc9065c6bb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049761008s May 24 21:15:38.049: INFO: Pod "downward-api-265dd6b3-f921-4ffd-8d3d-dbc9065c6bb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054774943s STEP: Saw pod success May 24 21:15:38.049: INFO: Pod "downward-api-265dd6b3-f921-4ffd-8d3d-dbc9065c6bb4" satisfied condition "success or failure" May 24 21:15:38.052: INFO: Trying to get logs from node jerma-worker pod downward-api-265dd6b3-f921-4ffd-8d3d-dbc9065c6bb4 container dapi-container: STEP: delete the pod May 24 21:15:38.088: INFO: Waiting for pod downward-api-265dd6b3-f921-4ffd-8d3d-dbc9065c6bb4 to disappear May 24 21:15:38.134: INFO: Pod downward-api-265dd6b3-f921-4ffd-8d3d-dbc9065c6bb4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:15:38.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9101" for this suite. 
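The downward API env vars this test checks are wired up with `resourceFieldRef`, which projects a container's own resource requests and limits into its environment. A minimal sketch (pod name, image, and the specific quantities are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28            # assumed image
    command: ["sh", "-c", "env"]   # the test greps these vars out of the log
    resources:
      requests:
        cpu: 250m                  # hypothetical quantities
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
```

When limits are omitted (as in the earlier "default limits.cpu/memory from node allocatable" test), the projected values fall back to the node's allocatable resources.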
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":480,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:15:38.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4315.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4315.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4315.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4315.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 21:15:44.245: INFO: DNS probes using dns-test-cbe6cd61-a543-46ad-8cda-25bf18b92fc7 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4315.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4315.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in 
`seq 1 30`; do dig +short dns-test-service-3.dns-4315.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4315.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 21:15:52.471: INFO: DNS probes using dns-test-357ebc69-2d22-402c-95cc-eedc39498e34 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4315.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4315.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4315.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4315.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 21:15:59.015: INFO: DNS probes using dns-test-452bd2ff-ef60-4f45-9786-d6c2ac506374 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:15:59.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4315" for this suite. 
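The service under test is an ExternalName service, which cluster DNS answers with a CNAME rather than a cluster IP. A sketch of its initial form, assuming the original target host (the service name and namespace are from this run; the test later patches `externalName` to bar.example.com and finally converts the service to type ClusterIP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-4315
spec:
  type: ExternalName
  externalName: foo.example.com   # assumed initial target; no selector, no ports needed
```

This is why the probe pods run `dig ... CNAME` for the first two phases and switch to `dig ... A` once the service becomes ClusterIP.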
• [SLOW TEST:20.970 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":33,"skipped":488,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:15:59.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:15:59.514: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6321 I0524 21:15:59.539560 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6321, replica count: 1 I0524 21:16:00.589940 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 21:16:01.590200 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 21:16:02.590381 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 
0 unknown, 0 runningButNotReady I0524 21:16:03.590595 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 21:16:03.739: INFO: Created: latency-svc-mjb9b May 24 21:16:03.755: INFO: Got endpoints: latency-svc-mjb9b [64.682949ms] May 24 21:16:03.853: INFO: Created: latency-svc-478jb May 24 21:16:03.869: INFO: Got endpoints: latency-svc-478jb [114.210945ms] May 24 21:16:03.955: INFO: Created: latency-svc-v5shc May 24 21:16:04.004: INFO: Got endpoints: latency-svc-v5shc [248.516442ms] May 24 21:16:04.004: INFO: Created: latency-svc-xpft8 May 24 21:16:04.012: INFO: Got endpoints: latency-svc-xpft8 [256.584792ms] May 24 21:16:04.104: INFO: Created: latency-svc-4b69d May 24 21:16:04.108: INFO: Got endpoints: latency-svc-4b69d [352.546774ms] May 24 21:16:04.202: INFO: Created: latency-svc-qk68w May 24 21:16:04.267: INFO: Got endpoints: latency-svc-qk68w [511.415156ms] May 24 21:16:04.271: INFO: Created: latency-svc-bgh7c May 24 21:16:04.278: INFO: Got endpoints: latency-svc-bgh7c [522.940416ms] May 24 21:16:04.305: INFO: Created: latency-svc-6mmns May 24 21:16:04.357: INFO: Got endpoints: latency-svc-6mmns [601.84452ms] May 24 21:16:04.429: INFO: Created: latency-svc-7xml7 May 24 21:16:04.447: INFO: Got endpoints: latency-svc-7xml7 [691.166838ms] May 24 21:16:04.472: INFO: Created: latency-svc-6bcvh May 24 21:16:04.488: INFO: Got endpoints: latency-svc-6bcvh [733.11063ms] May 24 21:16:04.559: INFO: Created: latency-svc-h2pgx May 24 21:16:04.592: INFO: Got endpoints: latency-svc-h2pgx [836.471535ms] May 24 21:16:04.639: INFO: Created: latency-svc-xqp9p May 24 21:16:04.657: INFO: Got endpoints: latency-svc-xqp9p [902.01415ms] May 24 21:16:04.709: INFO: Created: latency-svc-k7tn5 May 24 21:16:04.712: INFO: Got endpoints: latency-svc-k7tn5 [956.561581ms] May 24 21:16:04.767: INFO: Created: latency-svc-xrf28 May 24 21:16:04.782: INFO: Got endpoints: latency-svc-xrf28 
[1.026747405s] May 24 21:16:04.809: INFO: Created: latency-svc-9w5w2 May 24 21:16:04.879: INFO: Got endpoints: latency-svc-9w5w2 [1.123158613s] May 24 21:16:04.879: INFO: Created: latency-svc-rx4gs May 24 21:16:04.890: INFO: Got endpoints: latency-svc-rx4gs [1.134246234s] May 24 21:16:04.909: INFO: Created: latency-svc-4fqpg May 24 21:16:04.920: INFO: Got endpoints: latency-svc-4fqpg [1.050332681s] May 24 21:16:04.940: INFO: Created: latency-svc-q4zpc May 24 21:16:05.009: INFO: Got endpoints: latency-svc-q4zpc [1.005438163s] May 24 21:16:05.011: INFO: Created: latency-svc-smjf8 May 24 21:16:05.018: INFO: Got endpoints: latency-svc-smjf8 [1.006427439s] May 24 21:16:05.065: INFO: Created: latency-svc-jh7kj May 24 21:16:05.085: INFO: Got endpoints: latency-svc-jh7kj [977.323915ms] May 24 21:16:05.170: INFO: Created: latency-svc-jthbd May 24 21:16:05.175: INFO: Got endpoints: latency-svc-jthbd [908.011929ms] May 24 21:16:05.198: INFO: Created: latency-svc-cbxsj May 24 21:16:05.211: INFO: Got endpoints: latency-svc-cbxsj [933.075139ms] May 24 21:16:05.233: INFO: Created: latency-svc-zwcmx May 24 21:16:05.248: INFO: Got endpoints: latency-svc-zwcmx [890.727824ms] May 24 21:16:05.270: INFO: Created: latency-svc-4qf6k May 24 21:16:05.308: INFO: Got endpoints: latency-svc-4qf6k [861.579554ms] May 24 21:16:05.318: INFO: Created: latency-svc-2pmsv May 24 21:16:05.332: INFO: Got endpoints: latency-svc-2pmsv [843.742113ms] May 24 21:16:05.354: INFO: Created: latency-svc-d9x7p May 24 21:16:05.383: INFO: Got endpoints: latency-svc-d9x7p [790.963618ms] May 24 21:16:05.453: INFO: Created: latency-svc-pfs84 May 24 21:16:05.456: INFO: Got endpoints: latency-svc-pfs84 [798.459238ms] May 24 21:16:05.504: INFO: Created: latency-svc-jcl9g May 24 21:16:05.519: INFO: Got endpoints: latency-svc-jcl9g [806.54159ms] May 24 21:16:05.547: INFO: Created: latency-svc-b4w9c May 24 21:16:05.583: INFO: Got endpoints: latency-svc-b4w9c [801.075736ms] May 24 21:16:05.594: INFO: Created: 
latency-svc-rgdm4 May 24 21:16:05.628: INFO: Got endpoints: latency-svc-rgdm4 [749.857509ms] May 24 21:16:05.665: INFO: Created: latency-svc-ttv4s May 24 21:16:05.715: INFO: Got endpoints: latency-svc-ttv4s [825.284616ms] May 24 21:16:05.756: INFO: Created: latency-svc-cb2q4 May 24 21:16:05.770: INFO: Got endpoints: latency-svc-cb2q4 [849.89952ms] May 24 21:16:05.804: INFO: Created: latency-svc-5bcqw May 24 21:16:05.877: INFO: Got endpoints: latency-svc-5bcqw [868.032205ms] May 24 21:16:05.886: INFO: Created: latency-svc-bk9bf May 24 21:16:05.929: INFO: Got endpoints: latency-svc-bk9bf [910.431075ms] May 24 21:16:05.960: INFO: Created: latency-svc-4w8wn May 24 21:16:06.056: INFO: Got endpoints: latency-svc-4w8wn [971.387641ms] May 24 21:16:06.058: INFO: Created: latency-svc-mg429 May 24 21:16:06.119: INFO: Got endpoints: latency-svc-mg429 [943.887492ms] May 24 21:16:06.230: INFO: Created: latency-svc-6txjl May 24 21:16:06.256: INFO: Got endpoints: latency-svc-6txjl [1.045159862s] May 24 21:16:06.312: INFO: Created: latency-svc-m7c4q May 24 21:16:06.328: INFO: Got endpoints: latency-svc-m7c4q [1.08023538s] May 24 21:16:06.452: INFO: Created: latency-svc-bxqhl May 24 21:16:06.467: INFO: Got endpoints: latency-svc-bxqhl [1.159032102s] May 24 21:16:06.537: INFO: Created: latency-svc-6h55d May 24 21:16:06.537: INFO: Got endpoints: latency-svc-6h55d [1.204589224s] May 24 21:16:06.564: INFO: Created: latency-svc-x4dhv May 24 21:16:06.575: INFO: Got endpoints: latency-svc-x4dhv [1.192081651s] May 24 21:16:06.607: INFO: Created: latency-svc-2dgfl May 24 21:16:06.623: INFO: Got endpoints: latency-svc-2dgfl [1.167340612s] May 24 21:16:06.661: INFO: Created: latency-svc-xkdq4 May 24 21:16:06.671: INFO: Got endpoints: latency-svc-xkdq4 [1.152729523s] May 24 21:16:06.698: INFO: Created: latency-svc-6frnp May 24 21:16:06.714: INFO: Got endpoints: latency-svc-6frnp [1.130429761s] May 24 21:16:06.752: INFO: Created: latency-svc-btvtb May 24 21:16:06.794: INFO: Got endpoints: 
latency-svc-btvtb [1.165013943s] May 24 21:16:06.828: INFO: Created: latency-svc-kn59r May 24 21:16:06.859: INFO: Got endpoints: latency-svc-kn59r [1.143495738s] May 24 21:16:06.944: INFO: Created: latency-svc-tmrjd May 24 21:16:06.955: INFO: Got endpoints: latency-svc-tmrjd [1.185531272s] May 24 21:16:06.980: INFO: Created: latency-svc-czjv2 May 24 21:16:06.996: INFO: Got endpoints: latency-svc-czjv2 [1.11907054s] May 24 21:16:07.014: INFO: Created: latency-svc-j6gfj May 24 21:16:07.027: INFO: Got endpoints: latency-svc-j6gfj [71.111982ms] May 24 21:16:07.088: INFO: Created: latency-svc-4lbcd May 24 21:16:07.096: INFO: Got endpoints: latency-svc-4lbcd [1.166534267s] May 24 21:16:07.135: INFO: Created: latency-svc-vsdrt May 24 21:16:07.157: INFO: Got endpoints: latency-svc-vsdrt [1.100581997s] May 24 21:16:07.224: INFO: Created: latency-svc-bktmm May 24 21:16:07.234: INFO: Got endpoints: latency-svc-bktmm [1.115253108s] May 24 21:16:07.256: INFO: Created: latency-svc-mnrj5 May 24 21:16:07.279: INFO: Got endpoints: latency-svc-mnrj5 [1.022034754s] May 24 21:16:07.320: INFO: Created: latency-svc-w6dkk May 24 21:16:07.362: INFO: Got endpoints: latency-svc-w6dkk [1.033634343s] May 24 21:16:07.387: INFO: Created: latency-svc-tzpvk May 24 21:16:07.403: INFO: Got endpoints: latency-svc-tzpvk [935.960457ms] May 24 21:16:07.423: INFO: Created: latency-svc-7v7v7 May 24 21:16:07.440: INFO: Got endpoints: latency-svc-7v7v7 [902.786211ms] May 24 21:16:07.511: INFO: Created: latency-svc-f5lqr May 24 21:16:07.523: INFO: Got endpoints: latency-svc-f5lqr [948.315605ms] May 24 21:16:07.549: INFO: Created: latency-svc-rfm6b May 24 21:16:07.572: INFO: Got endpoints: latency-svc-rfm6b [948.841511ms] May 24 21:16:07.667: INFO: Created: latency-svc-k7x8v May 24 21:16:07.674: INFO: Got endpoints: latency-svc-k7x8v [1.002695121s] May 24 21:16:07.705: INFO: Created: latency-svc-x78mc May 24 21:16:07.722: INFO: Got endpoints: latency-svc-x78mc [1.008661849s] May 24 21:16:07.753: INFO: 
Created: latency-svc-8ppsr May 24 21:16:07.765: INFO: Got endpoints: latency-svc-8ppsr [971.229159ms] May 24 21:16:07.849: INFO: Created: latency-svc-fbxw4 May 24 21:16:07.856: INFO: Got endpoints: latency-svc-fbxw4 [997.140081ms] May 24 21:16:07.897: INFO: Created: latency-svc-zx7vn May 24 21:16:07.931: INFO: Got endpoints: latency-svc-zx7vn [934.108997ms] May 24 21:16:07.939: INFO: Created: latency-svc-kxd2x May 24 21:16:07.981: INFO: Got endpoints: latency-svc-kxd2x [954.74382ms] May 24 21:16:08.024: INFO: Created: latency-svc-kqhwx May 24 21:16:08.062: INFO: Got endpoints: latency-svc-kqhwx [966.924303ms] May 24 21:16:08.101: INFO: Created: latency-svc-mpkz2 May 24 21:16:08.114: INFO: Got endpoints: latency-svc-mpkz2 [956.518685ms] May 24 21:16:08.212: INFO: Created: latency-svc-btrbj May 24 21:16:08.228: INFO: Got endpoints: latency-svc-btrbj [993.472545ms] May 24 21:16:08.251: INFO: Created: latency-svc-2f5vm May 24 21:16:08.264: INFO: Got endpoints: latency-svc-2f5vm [985.85092ms] May 24 21:16:08.362: INFO: Created: latency-svc-jhb64 May 24 21:16:08.389: INFO: Got endpoints: latency-svc-jhb64 [1.027035168s] May 24 21:16:08.443: INFO: Created: latency-svc-jfdnv May 24 21:16:08.524: INFO: Got endpoints: latency-svc-jfdnv [1.120362063s] May 24 21:16:08.539: INFO: Created: latency-svc-z6hnl May 24 21:16:08.550: INFO: Got endpoints: latency-svc-z6hnl [1.110369253s] May 24 21:16:08.592: INFO: Created: latency-svc-x64cp May 24 21:16:08.604: INFO: Got endpoints: latency-svc-x64cp [1.080975544s] May 24 21:16:08.650: INFO: Created: latency-svc-kcpvg May 24 21:16:08.665: INFO: Got endpoints: latency-svc-kcpvg [1.093126247s] May 24 21:16:08.695: INFO: Created: latency-svc-kvms7 May 24 21:16:08.713: INFO: Got endpoints: latency-svc-kvms7 [1.039149538s] May 24 21:16:08.743: INFO: Created: latency-svc-pfbw4 May 24 21:16:08.805: INFO: Got endpoints: latency-svc-pfbw4 [1.082660408s] May 24 21:16:08.807: INFO: Created: latency-svc-b9bzg May 24 21:16:08.826: INFO: Got 
endpoints: latency-svc-b9bzg [1.060955153s] May 24 21:16:08.856: INFO: Created: latency-svc-89bgq May 24 21:16:08.882: INFO: Got endpoints: latency-svc-89bgq [1.025819735s] May 24 21:16:08.949: INFO: Created: latency-svc-79mdp May 24 21:16:08.952: INFO: Got endpoints: latency-svc-79mdp [1.021330405s] May 24 21:16:08.977: INFO: Created: latency-svc-6pmdm May 24 21:16:08.990: INFO: Got endpoints: latency-svc-6pmdm [1.008207627s] May 24 21:16:09.018: INFO: Created: latency-svc-r87pb May 24 21:16:09.032: INFO: Got endpoints: latency-svc-r87pb [969.201884ms] May 24 21:16:09.086: INFO: Created: latency-svc-rlbb2 May 24 21:16:09.090: INFO: Got endpoints: latency-svc-rlbb2 [976.036482ms] May 24 21:16:09.127: INFO: Created: latency-svc-zt5vr May 24 21:16:09.165: INFO: Got endpoints: latency-svc-zt5vr [937.4407ms] May 24 21:16:09.221: INFO: Created: latency-svc-clblj May 24 21:16:09.224: INFO: Got endpoints: latency-svc-clblj [959.558092ms] May 24 21:16:09.246: INFO: Created: latency-svc-92ngv May 24 21:16:09.294: INFO: Got endpoints: latency-svc-92ngv [904.463975ms] May 24 21:16:09.374: INFO: Created: latency-svc-6mksd May 24 21:16:09.399: INFO: Got endpoints: latency-svc-6mksd [875.521101ms] May 24 21:16:09.443: INFO: Created: latency-svc-wvslw May 24 21:16:09.459: INFO: Got endpoints: latency-svc-wvslw [909.292022ms] May 24 21:16:09.522: INFO: Created: latency-svc-jcr8l May 24 21:16:09.538: INFO: Got endpoints: latency-svc-jcr8l [933.450408ms] May 24 21:16:09.564: INFO: Created: latency-svc-mslc6 May 24 21:16:09.574: INFO: Got endpoints: latency-svc-mslc6 [908.38233ms] May 24 21:16:09.595: INFO: Created: latency-svc-qj2w4 May 24 21:16:09.607: INFO: Got endpoints: latency-svc-qj2w4 [893.858969ms] May 24 21:16:09.673: INFO: Created: latency-svc-vj7t9 May 24 21:16:09.686: INFO: Got endpoints: latency-svc-vj7t9 [880.574776ms] May 24 21:16:09.719: INFO: Created: latency-svc-zz7jq May 24 21:16:09.746: INFO: Got endpoints: latency-svc-zz7jq [920.542401ms] May 24 21:16:09.816: 
INFO: Created: latency-svc-65s6t May 24 21:16:09.830: INFO: Got endpoints: latency-svc-65s6t [948.597323ms] May 24 21:16:09.853: INFO: Created: latency-svc-9bcqp May 24 21:16:09.866: INFO: Got endpoints: latency-svc-9bcqp [914.004075ms] May 24 21:16:09.889: INFO: Created: latency-svc-fxtc8 May 24 21:16:09.960: INFO: Got endpoints: latency-svc-fxtc8 [970.755688ms] May 24 21:16:09.990: INFO: Created: latency-svc-v64t2 May 24 21:16:10.007: INFO: Got endpoints: latency-svc-v64t2 [974.689026ms] May 24 21:16:10.037: INFO: Created: latency-svc-w2wrj May 24 21:16:10.053: INFO: Got endpoints: latency-svc-w2wrj [963.113753ms] May 24 21:16:10.104: INFO: Created: latency-svc-h9dp9 May 24 21:16:10.114: INFO: Got endpoints: latency-svc-h9dp9 [948.913528ms] May 24 21:16:10.141: INFO: Created: latency-svc-cm2ph May 24 21:16:10.162: INFO: Got endpoints: latency-svc-cm2ph [937.464525ms] May 24 21:16:10.194: INFO: Created: latency-svc-v5srn May 24 21:16:10.254: INFO: Got endpoints: latency-svc-v5srn [960.517563ms] May 24 21:16:10.260: INFO: Created: latency-svc-zsft9 May 24 21:16:10.270: INFO: Got endpoints: latency-svc-zsft9 [870.139285ms] May 24 21:16:10.303: INFO: Created: latency-svc-m2c9n May 24 21:16:10.324: INFO: Got endpoints: latency-svc-m2c9n [864.620414ms] May 24 21:16:10.344: INFO: Created: latency-svc-bnbsn May 24 21:16:10.386: INFO: Got endpoints: latency-svc-bnbsn [847.593073ms] May 24 21:16:10.398: INFO: Created: latency-svc-kbzm2 May 24 21:16:10.445: INFO: Got endpoints: latency-svc-kbzm2 [871.461122ms] May 24 21:16:10.536: INFO: Created: latency-svc-ft6qb May 24 21:16:10.547: INFO: Got endpoints: latency-svc-ft6qb [939.256797ms] May 24 21:16:10.591: INFO: Created: latency-svc-nchzd May 24 21:16:10.613: INFO: Got endpoints: latency-svc-nchzd [927.230968ms] May 24 21:16:10.697: INFO: Created: latency-svc-6z4sq May 24 21:16:10.700: INFO: Got endpoints: latency-svc-6z4sq [953.848973ms] May 24 21:16:10.733: INFO: Created: latency-svc-snqc5 May 24 21:16:10.742: INFO: Got 
endpoints: latency-svc-snqc5 [911.835893ms] May 24 21:16:10.764: INFO: Created: latency-svc-kzgnl May 24 21:16:10.772: INFO: Got endpoints: latency-svc-kzgnl [906.491324ms] May 24 21:16:10.794: INFO: Created: latency-svc-nt58g May 24 21:16:10.859: INFO: Got endpoints: latency-svc-nt58g [898.121738ms] May 24 21:16:10.889: INFO: Created: latency-svc-smnhg May 24 21:16:10.905: INFO: Got endpoints: latency-svc-smnhg [898.501208ms] May 24 21:16:10.932: INFO: Created: latency-svc-86rrk May 24 21:16:10.944: INFO: Got endpoints: latency-svc-86rrk [891.459925ms] May 24 21:16:11.003: INFO: Created: latency-svc-g4t88 May 24 21:16:11.013: INFO: Got endpoints: latency-svc-g4t88 [899.160008ms] May 24 21:16:11.040: INFO: Created: latency-svc-szxvm May 24 21:16:11.055: INFO: Got endpoints: latency-svc-szxvm [893.920349ms] May 24 21:16:11.095: INFO: Created: latency-svc-xgz7c May 24 21:16:11.146: INFO: Got endpoints: latency-svc-xgz7c [892.002492ms] May 24 21:16:11.173: INFO: Created: latency-svc-6qrbk May 24 21:16:11.194: INFO: Got endpoints: latency-svc-6qrbk [924.509496ms] May 24 21:16:11.237: INFO: Created: latency-svc-qqfl9 May 24 21:16:11.278: INFO: Got endpoints: latency-svc-qqfl9 [953.405089ms] May 24 21:16:11.286: INFO: Created: latency-svc-swm8b May 24 21:16:11.316: INFO: Got endpoints: latency-svc-swm8b [930.439157ms] May 24 21:16:11.347: INFO: Created: latency-svc-r5tnh May 24 21:16:11.364: INFO: Got endpoints: latency-svc-r5tnh [918.518466ms] May 24 21:16:11.422: INFO: Created: latency-svc-v78td May 24 21:16:11.435: INFO: Got endpoints: latency-svc-v78td [887.893956ms] May 24 21:16:11.460: INFO: Created: latency-svc-zg979 May 24 21:16:11.483: INFO: Got endpoints: latency-svc-zg979 [869.705863ms] May 24 21:16:11.513: INFO: Created: latency-svc-zvb64 May 24 21:16:11.571: INFO: Got endpoints: latency-svc-zvb64 [871.149612ms] May 24 21:16:11.574: INFO: Created: latency-svc-r9xg7 May 24 21:16:11.580: INFO: Got endpoints: latency-svc-r9xg7 [837.681956ms] May 24 21:16:11.605: 
INFO: Created: latency-svc-c68lt May 24 21:16:11.622: INFO: Got endpoints: latency-svc-c68lt [849.608667ms] May 24 21:16:11.663: INFO: Created: latency-svc-d8ff7 May 24 21:16:11.739: INFO: Got endpoints: latency-svc-d8ff7 [880.307642ms] May 24 21:16:11.772: INFO: Created: latency-svc-6wwfp May 24 21:16:11.802: INFO: Got endpoints: latency-svc-6wwfp [896.803253ms] May 24 21:16:11.838: INFO: Created: latency-svc-xpldb May 24 21:16:11.877: INFO: Got endpoints: latency-svc-xpldb [932.350331ms] May 24 21:16:11.897: INFO: Created: latency-svc-zh7lt May 24 21:16:11.927: INFO: Got endpoints: latency-svc-zh7lt [913.467701ms] May 24 21:16:11.969: INFO: Created: latency-svc-j4qqs May 24 21:16:12.014: INFO: Got endpoints: latency-svc-j4qqs [958.730059ms] May 24 21:16:12.030: INFO: Created: latency-svc-x25tp May 24 21:16:12.047: INFO: Got endpoints: latency-svc-x25tp [900.792556ms] May 24 21:16:12.072: INFO: Created: latency-svc-xwlqs May 24 21:16:12.092: INFO: Got endpoints: latency-svc-xwlqs [897.540487ms] May 24 21:16:12.113: INFO: Created: latency-svc-bq2kv May 24 21:16:12.170: INFO: Got endpoints: latency-svc-bq2kv [892.641154ms] May 24 21:16:12.197: INFO: Created: latency-svc-wmqtp May 24 21:16:12.210: INFO: Got endpoints: latency-svc-wmqtp [893.867616ms] May 24 21:16:12.234: INFO: Created: latency-svc-r4bvh May 24 21:16:12.252: INFO: Got endpoints: latency-svc-r4bvh [888.047165ms] May 24 21:16:12.314: INFO: Created: latency-svc-pgrdd May 24 21:16:12.317: INFO: Got endpoints: latency-svc-pgrdd [881.891047ms] May 24 21:16:12.342: INFO: Created: latency-svc-hgznd May 24 21:16:12.382: INFO: Got endpoints: latency-svc-hgznd [899.499186ms] May 24 21:16:12.464: INFO: Created: latency-svc-97zcd May 24 21:16:12.469: INFO: Got endpoints: latency-svc-97zcd [896.956867ms] May 24 21:16:12.498: INFO: Created: latency-svc-ccp7w May 24 21:16:12.529: INFO: Got endpoints: latency-svc-ccp7w [948.866501ms] May 24 21:16:12.619: INFO: Created: latency-svc-lrfrj May 24 21:16:12.622: INFO: Got 
endpoints: latency-svc-lrfrj [1.000272978s] May 24 21:16:12.665: INFO: Created: latency-svc-kl922 May 24 21:16:12.680: INFO: Got endpoints: latency-svc-kl922 [940.464866ms] May 24 21:16:12.701: INFO: Created: latency-svc-bgcf2 May 24 21:16:12.716: INFO: Got endpoints: latency-svc-bgcf2 [913.669013ms] May 24 21:16:12.763: INFO: Created: latency-svc-cklrz May 24 21:16:12.771: INFO: Got endpoints: latency-svc-cklrz [893.656442ms] May 24 21:16:12.792: INFO: Created: latency-svc-6fn8m May 24 21:16:12.800: INFO: Got endpoints: latency-svc-6fn8m [873.250019ms] May 24 21:16:12.822: INFO: Created: latency-svc-2hhzm May 24 21:16:12.955: INFO: Got endpoints: latency-svc-2hhzm [940.568619ms] May 24 21:16:12.957: INFO: Created: latency-svc-jhqkd May 24 21:16:12.984: INFO: Got endpoints: latency-svc-jhqkd [936.547504ms] May 24 21:16:13.027: INFO: Created: latency-svc-688mn May 24 21:16:13.044: INFO: Got endpoints: latency-svc-688mn [951.811507ms] May 24 21:16:13.120: INFO: Created: latency-svc-pvhzr May 24 21:16:13.146: INFO: Got endpoints: latency-svc-pvhzr [975.212067ms] May 24 21:16:13.177: INFO: Created: latency-svc-jlg2v May 24 21:16:13.193: INFO: Got endpoints: latency-svc-jlg2v [983.25352ms] May 24 21:16:13.236: INFO: Created: latency-svc-ktzxs May 24 21:16:13.254: INFO: Got endpoints: latency-svc-ktzxs [1.001821709s] May 24 21:16:13.300: INFO: Created: latency-svc-h82sf May 24 21:16:13.314: INFO: Got endpoints: latency-svc-h82sf [997.248597ms] May 24 21:16:13.368: INFO: Created: latency-svc-gx7v4 May 24 21:16:13.396: INFO: Got endpoints: latency-svc-gx7v4 [1.013844862s] May 24 21:16:13.397: INFO: Created: latency-svc-h4sqb May 24 21:16:13.410: INFO: Got endpoints: latency-svc-h4sqb [941.739577ms] May 24 21:16:13.434: INFO: Created: latency-svc-g6jpz May 24 21:16:13.446: INFO: Got endpoints: latency-svc-g6jpz [917.527908ms] May 24 21:16:13.530: INFO: Created: latency-svc-jgtkm May 24 21:16:13.554: INFO: Got endpoints: latency-svc-jgtkm [931.381035ms] May 24 21:16:13.554: 
INFO: Created: latency-svc-chr69 May 24 21:16:13.567: INFO: Got endpoints: latency-svc-chr69 [887.293518ms] May 24 21:16:13.600: INFO: Created: latency-svc-k4xdd May 24 21:16:13.616: INFO: Got endpoints: latency-svc-k4xdd [899.878969ms] May 24 21:16:13.673: INFO: Created: latency-svc-qjtbs May 24 21:16:13.681: INFO: Got endpoints: latency-svc-qjtbs [910.760825ms] May 24 21:16:13.758: INFO: Created: latency-svc-k99b7 May 24 21:16:13.811: INFO: Got endpoints: latency-svc-k99b7 [1.010586995s] May 24 21:16:13.834: INFO: Created: latency-svc-2dp2p May 24 21:16:13.850: INFO: Got endpoints: latency-svc-2dp2p [895.528448ms] May 24 21:16:13.870: INFO: Created: latency-svc-f9dgc May 24 21:16:13.880: INFO: Got endpoints: latency-svc-f9dgc [896.453008ms] May 24 21:16:13.961: INFO: Created: latency-svc-4d4nx May 24 21:16:13.964: INFO: Got endpoints: latency-svc-4d4nx [920.212691ms] May 24 21:16:14.021: INFO: Created: latency-svc-fgz88 May 24 21:16:14.037: INFO: Got endpoints: latency-svc-fgz88 [891.284513ms] May 24 21:16:14.105: INFO: Created: latency-svc-qdn2j May 24 21:16:14.108: INFO: Got endpoints: latency-svc-qdn2j [914.220181ms] May 24 21:16:14.177: INFO: Created: latency-svc-2nkrp May 24 21:16:14.187: INFO: Got endpoints: latency-svc-2nkrp [933.071171ms] May 24 21:16:14.255: INFO: Created: latency-svc-d4997 May 24 21:16:14.258: INFO: Got endpoints: latency-svc-d4997 [944.479138ms] May 24 21:16:14.309: INFO: Created: latency-svc-hzvvh May 24 21:16:14.325: INFO: Got endpoints: latency-svc-hzvvh [929.111388ms] May 24 21:16:14.404: INFO: Created: latency-svc-zbg7r May 24 21:16:14.410: INFO: Got endpoints: latency-svc-zbg7r [999.528518ms] May 24 21:16:14.446: INFO: Created: latency-svc-xr2wv May 24 21:16:14.458: INFO: Got endpoints: latency-svc-xr2wv [1.011065162s] May 24 21:16:14.482: INFO: Created: latency-svc-mdjs6 May 24 21:16:14.494: INFO: Got endpoints: latency-svc-mdjs6 [940.400308ms] May 24 21:16:14.548: INFO: Created: latency-svc-bl9qj May 24 21:16:14.579: INFO: 
Created: latency-svc-hkzrn May 24 21:16:14.580: INFO: Got endpoints: latency-svc-bl9qj [1.012813365s] May 24 21:16:14.626: INFO: Got endpoints: latency-svc-hkzrn [1.010617729s] May 24 21:16:14.691: INFO: Created: latency-svc-dvdhf May 24 21:16:14.703: INFO: Got endpoints: latency-svc-dvdhf [1.021994735s] May 24 21:16:14.728: INFO: Created: latency-svc-8qnth May 24 21:16:14.746: INFO: Got endpoints: latency-svc-8qnth [935.098783ms] May 24 21:16:14.765: INFO: Created: latency-svc-vdvxm May 24 21:16:14.776: INFO: Got endpoints: latency-svc-vdvxm [925.506013ms] May 24 21:16:14.841: INFO: Created: latency-svc-wxnn8 May 24 21:16:14.866: INFO: Created: latency-svc-rcstv May 24 21:16:14.866: INFO: Got endpoints: latency-svc-wxnn8 [986.109786ms] May 24 21:16:14.890: INFO: Got endpoints: latency-svc-rcstv [926.311672ms] May 24 21:16:14.920: INFO: Created: latency-svc-c6dws May 24 21:16:14.932: INFO: Got endpoints: latency-svc-c6dws [895.493127ms] May 24 21:16:14.981: INFO: Created: latency-svc-v8c4p May 24 21:16:14.994: INFO: Got endpoints: latency-svc-v8c4p [886.270983ms] May 24 21:16:15.030: INFO: Created: latency-svc-zkwr8 May 24 21:16:15.042: INFO: Got endpoints: latency-svc-zkwr8 [854.936284ms] May 24 21:16:15.104: INFO: Created: latency-svc-75s8h May 24 21:16:15.136: INFO: Got endpoints: latency-svc-75s8h [877.736202ms] May 24 21:16:15.173: INFO: Created: latency-svc-4jdq4 May 24 21:16:15.186: INFO: Got endpoints: latency-svc-4jdq4 [860.666292ms] May 24 21:16:15.260: INFO: Created: latency-svc-fw6mk May 24 21:16:15.276: INFO: Got endpoints: latency-svc-fw6mk [865.872525ms] May 24 21:16:15.304: INFO: Created: latency-svc-657hx May 24 21:16:15.318: INFO: Got endpoints: latency-svc-657hx [860.763124ms] May 24 21:16:15.342: INFO: Created: latency-svc-qhgdw May 24 21:16:15.354: INFO: Got endpoints: latency-svc-qhgdw [859.790158ms] May 24 21:16:15.404: INFO: Created: latency-svc-wxnxt May 24 21:16:15.408: INFO: Got endpoints: latency-svc-wxnxt [828.615396ms] May 24 
21:16:15.431: INFO: Created: latency-svc-lljh7 May 24 21:16:15.445: INFO: Got endpoints: latency-svc-lljh7 [818.777427ms] May 24 21:16:15.467: INFO: Created: latency-svc-kpmgf May 24 21:16:15.475: INFO: Got endpoints: latency-svc-kpmgf [771.647345ms] May 24 21:16:15.500: INFO: Created: latency-svc-ms675 May 24 21:16:15.535: INFO: Got endpoints: latency-svc-ms675 [789.570641ms] May 24 21:16:15.550: INFO: Created: latency-svc-hfc8t May 24 21:16:15.566: INFO: Got endpoints: latency-svc-hfc8t [790.459905ms] May 24 21:16:15.586: INFO: Created: latency-svc-9r9s5 May 24 21:16:15.603: INFO: Got endpoints: latency-svc-9r9s5 [736.370409ms] May 24 21:16:15.622: INFO: Created: latency-svc-gk7zs May 24 21:16:15.679: INFO: Got endpoints: latency-svc-gk7zs [788.685843ms] May 24 21:16:15.707: INFO: Created: latency-svc-pgzfr May 24 21:16:15.723: INFO: Got endpoints: latency-svc-pgzfr [790.297616ms] May 24 21:16:15.743: INFO: Created: latency-svc-d2qhd May 24 21:16:15.753: INFO: Got endpoints: latency-svc-d2qhd [759.172717ms] May 24 21:16:15.772: INFO: Created: latency-svc-jtq8d May 24 21:16:15.847: INFO: Got endpoints: latency-svc-jtq8d [805.173458ms] May 24 21:16:15.848: INFO: Created: latency-svc-hn474 May 24 21:16:15.858: INFO: Got endpoints: latency-svc-hn474 [721.955122ms] May 24 21:16:15.875: INFO: Created: latency-svc-tl8pd May 24 21:16:15.892: INFO: Got endpoints: latency-svc-tl8pd [705.726247ms] May 24 21:16:15.911: INFO: Created: latency-svc-jfngh May 24 21:16:15.922: INFO: Got endpoints: latency-svc-jfngh [646.260457ms] May 24 21:16:15.943: INFO: Created: latency-svc-b6vd4 May 24 21:16:15.978: INFO: Got endpoints: latency-svc-b6vd4 [659.923536ms] May 24 21:16:15.994: INFO: Created: latency-svc-hglz2 May 24 21:16:16.030: INFO: Got endpoints: latency-svc-hglz2 [675.869887ms] May 24 21:16:16.066: INFO: Created: latency-svc-ldwg7 May 24 21:16:16.110: INFO: Got endpoints: latency-svc-ldwg7 [701.660925ms] May 24 21:16:16.134: INFO: Created: latency-svc-ckr8d May 24 
21:16:16.151: INFO: Got endpoints: latency-svc-ckr8d [705.980397ms] May 24 21:16:16.175: INFO: Created: latency-svc-jbh8c May 24 21:16:16.187: INFO: Got endpoints: latency-svc-jbh8c [712.238411ms] May 24 21:16:16.187: INFO: Latencies: [71.111982ms 114.210945ms 248.516442ms 256.584792ms 352.546774ms 511.415156ms 522.940416ms 601.84452ms 646.260457ms 659.923536ms 675.869887ms 691.166838ms 701.660925ms 705.726247ms 705.980397ms 712.238411ms 721.955122ms 733.11063ms 736.370409ms 749.857509ms 759.172717ms 771.647345ms 788.685843ms 789.570641ms 790.297616ms 790.459905ms 790.963618ms 798.459238ms 801.075736ms 805.173458ms 806.54159ms 818.777427ms 825.284616ms 828.615396ms 836.471535ms 837.681956ms 843.742113ms 847.593073ms 849.608667ms 849.89952ms 854.936284ms 859.790158ms 860.666292ms 860.763124ms 861.579554ms 864.620414ms 865.872525ms 868.032205ms 869.705863ms 870.139285ms 871.149612ms 871.461122ms 873.250019ms 875.521101ms 877.736202ms 880.307642ms 880.574776ms 881.891047ms 886.270983ms 887.293518ms 887.893956ms 888.047165ms 890.727824ms 891.284513ms 891.459925ms 892.002492ms 892.641154ms 893.656442ms 893.858969ms 893.867616ms 893.920349ms 895.493127ms 895.528448ms 896.453008ms 896.803253ms 896.956867ms 897.540487ms 898.121738ms 898.501208ms 899.160008ms 899.499186ms 899.878969ms 900.792556ms 902.01415ms 902.786211ms 904.463975ms 906.491324ms 908.011929ms 908.38233ms 909.292022ms 910.431075ms 910.760825ms 911.835893ms 913.467701ms 913.669013ms 914.004075ms 914.220181ms 917.527908ms 918.518466ms 920.212691ms 920.542401ms 924.509496ms 925.506013ms 926.311672ms 927.230968ms 929.111388ms 930.439157ms 931.381035ms 932.350331ms 933.071171ms 933.075139ms 933.450408ms 934.108997ms 935.098783ms 935.960457ms 936.547504ms 937.4407ms 937.464525ms 939.256797ms 940.400308ms 940.464866ms 940.568619ms 941.739577ms 943.887492ms 944.479138ms 948.315605ms 948.597323ms 948.841511ms 948.866501ms 948.913528ms 951.811507ms 953.405089ms 953.848973ms 954.74382ms 956.518685ms 956.561581ms 
958.730059ms 959.558092ms 960.517563ms 963.113753ms 966.924303ms 969.201884ms 970.755688ms 971.229159ms 971.387641ms 974.689026ms 975.212067ms 976.036482ms 977.323915ms 983.25352ms 985.85092ms 986.109786ms 993.472545ms 997.140081ms 997.248597ms 999.528518ms 1.000272978s 1.001821709s 1.002695121s 1.005438163s 1.006427439s 1.008207627s 1.008661849s 1.010586995s 1.010617729s 1.011065162s 1.012813365s 1.013844862s 1.021330405s 1.021994735s 1.022034754s 1.025819735s 1.026747405s 1.027035168s 1.033634343s 1.039149538s 1.045159862s 1.050332681s 1.060955153s 1.08023538s 1.080975544s 1.082660408s 1.093126247s 1.100581997s 1.110369253s 1.115253108s 1.11907054s 1.120362063s 1.123158613s 1.130429761s 1.134246234s 1.143495738s 1.152729523s 1.159032102s 1.165013943s 1.166534267s 1.167340612s 1.185531272s 1.192081651s 1.204589224s] May 24 21:16:16.187: INFO: 50 %ile: 920.542401ms May 24 21:16:16.187: INFO: 90 %ile: 1.080975544s May 24 21:16:16.187: INFO: 99 %ile: 1.192081651s May 24 21:16:16.187: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:16:16.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6321" for this suite. 
• [SLOW TEST:17.084 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":34,"skipped":503,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:16:16.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-f5646d6b-9e3f-4304-b193-443ad69627d8
STEP: Creating a pod to test consume configMaps
May 24 21:16:16.303: INFO: Waiting up to 5m0s for pod "pod-configmaps-3fae9483-8eae-4a9c-b524-8c37a7854961" in namespace "configmap-9714" to be "success or failure"
May 24 21:16:16.307: INFO: Pod "pod-configmaps-3fae9483-8eae-4a9c-b524-8c37a7854961": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142122ms
May 24 21:16:18.311: INFO: Pod "pod-configmaps-3fae9483-8eae-4a9c-b524-8c37a7854961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008626205s
May 24 21:16:20.316: INFO: Pod "pod-configmaps-3fae9483-8eae-4a9c-b524-8c37a7854961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012995516s
STEP: Saw pod success
May 24 21:16:20.316: INFO: Pod "pod-configmaps-3fae9483-8eae-4a9c-b524-8c37a7854961" satisfied condition "success or failure"
May 24 21:16:20.319: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-3fae9483-8eae-4a9c-b524-8c37a7854961 container configmap-volume-test:
STEP: delete the pod
May 24 21:16:20.381: INFO: Waiting for pod pod-configmaps-3fae9483-8eae-4a9c-b524-8c37a7854961 to disappear
May 24 21:16:20.391: INFO: Pod pod-configmaps-3fae9483-8eae-4a9c-b524-8c37a7854961 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:16:20.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9714" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":505,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:16:20.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0524 21:16:50.978558 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 24 21:16:50.978: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:16:50.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9044" for this suite.
• [SLOW TEST:30.588 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":36,"skipped":512,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:16:50.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:16:55.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8485" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":517,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:16:55.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 24 21:16:55.221: INFO: Waiting up to 5m0s for pod "pod-3a2d386a-28a8-4d82-95bf-2293dce8a55e" in namespace "emptydir-431" to be "success or failure" May 24 21:16:55.224: INFO: Pod "pod-3a2d386a-28a8-4d82-95bf-2293dce8a55e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.012498ms May 24 21:16:57.543: INFO: Pod "pod-3a2d386a-28a8-4d82-95bf-2293dce8a55e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321178593s May 24 21:16:59.546: INFO: Pod "pod-3a2d386a-28a8-4d82-95bf-2293dce8a55e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.32456091s STEP: Saw pod success May 24 21:16:59.546: INFO: Pod "pod-3a2d386a-28a8-4d82-95bf-2293dce8a55e" satisfied condition "success or failure" May 24 21:16:59.548: INFO: Trying to get logs from node jerma-worker2 pod pod-3a2d386a-28a8-4d82-95bf-2293dce8a55e container test-container: STEP: delete the pod May 24 21:16:59.597: INFO: Waiting for pod pod-3a2d386a-28a8-4d82-95bf-2293dce8a55e to disappear May 24 21:16:59.602: INFO: Pod pod-3a2d386a-28a8-4d82-95bf-2293dce8a55e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:16:59.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-431" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":530,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:16:59.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-ba9b7039-5164-43cb-af7d-4dafbdfc7b29 STEP: Creating a pod to test consume configMaps May 24 21:16:59.665: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-f1b174c2-402e-4b99-9d86-c1420666e257" in namespace "projected-5496" to be "success or failure" May 24 21:16:59.668: INFO: Pod "pod-projected-configmaps-f1b174c2-402e-4b99-9d86-c1420666e257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375397ms May 24 21:17:01.716: INFO: Pod "pod-projected-configmaps-f1b174c2-402e-4b99-9d86-c1420666e257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050440988s May 24 21:17:03.719: INFO: Pod "pod-projected-configmaps-f1b174c2-402e-4b99-9d86-c1420666e257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053676697s STEP: Saw pod success May 24 21:17:03.719: INFO: Pod "pod-projected-configmaps-f1b174c2-402e-4b99-9d86-c1420666e257" satisfied condition "success or failure" May 24 21:17:03.722: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f1b174c2-402e-4b99-9d86-c1420666e257 container projected-configmap-volume-test: STEP: delete the pod May 24 21:17:03.772: INFO: Waiting for pod pod-projected-configmaps-f1b174c2-402e-4b99-9d86-c1420666e257 to disappear May 24 21:17:03.778: INFO: Pod pod-projected-configmaps-f1b174c2-402e-4b99-9d86-c1420666e257 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:17:03.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5496" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:17:03.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:17:14.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-223" for this suite. • [SLOW TEST:11.118 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":40,"skipped":605,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:17:14.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:17:32.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8871" for this suite. • [SLOW TEST:17.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":41,"skipped":611,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:17:32.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 24 21:17:32.092: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
May 24 21:17:32.396: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 24 21:17:34.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:17:36.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725951852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:17:39.338: INFO: Waited 523.685784ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:17:39.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5886" for this suite. • [SLOW TEST:7.849 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":42,"skipped":613,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:17:39.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:17:44.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4686" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":636,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:17:44.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 24 21:17:44.217: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:17:51.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8117" for this suite. 
• [SLOW TEST:7.754 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":44,"skipped":641,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:17:51.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 24 21:17:52.012: INFO: Waiting up to 5m0s for pod "pod-ec813ca7-b732-4c65-943f-3680aba3d696" in namespace "emptydir-9214" to be "success or failure" May 24 21:17:52.039: INFO: Pod "pod-ec813ca7-b732-4c65-943f-3680aba3d696": Phase="Pending", Reason="", readiness=false. Elapsed: 26.205182ms May 24 21:17:54.043: INFO: Pod "pod-ec813ca7-b732-4c65-943f-3680aba3d696": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030203376s May 24 21:17:56.046: INFO: Pod "pod-ec813ca7-b732-4c65-943f-3680aba3d696": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033534992s STEP: Saw pod success May 24 21:17:56.046: INFO: Pod "pod-ec813ca7-b732-4c65-943f-3680aba3d696" satisfied condition "success or failure" May 24 21:17:56.048: INFO: Trying to get logs from node jerma-worker2 pod pod-ec813ca7-b732-4c65-943f-3680aba3d696 container test-container: STEP: delete the pod May 24 21:17:56.062: INFO: Waiting for pod pod-ec813ca7-b732-4c65-943f-3680aba3d696 to disappear May 24 21:17:56.067: INFO: Pod pod-ec813ca7-b732-4c65-943f-3680aba3d696 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:17:56.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9214" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:17:56.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:18:00.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1478" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:18:00.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8375 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 24 21:18:00.501: INFO: Found 0 stateful pods, waiting for 3 May 24 21:18:10.506: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 21:18:10.506: INFO: Waiting for pod ss2-1 to enter Running - 
Ready=true, currently Running - Ready=true May 24 21:18:10.506: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 24 21:18:20.507: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 21:18:20.507: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 21:18:20.507: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 24 21:18:20.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8375 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 21:18:20.768: INFO: stderr: "I0524 21:18:20.645011 65 log.go:172] (0xc000028f20) (0xc000717d60) Create stream\nI0524 21:18:20.645061 65 log.go:172] (0xc000028f20) (0xc000717d60) Stream added, broadcasting: 1\nI0524 21:18:20.647621 65 log.go:172] (0xc000028f20) Reply frame received for 1\nI0524 21:18:20.647664 65 log.go:172] (0xc000028f20) (0xc0006dc000) Create stream\nI0524 21:18:20.647677 65 log.go:172] (0xc000028f20) (0xc0006dc000) Stream added, broadcasting: 3\nI0524 21:18:20.648701 65 log.go:172] (0xc000028f20) Reply frame received for 3\nI0524 21:18:20.648744 65 log.go:172] (0xc000028f20) (0xc0002214a0) Create stream\nI0524 21:18:20.648769 65 log.go:172] (0xc000028f20) (0xc0002214a0) Stream added, broadcasting: 5\nI0524 21:18:20.650098 65 log.go:172] (0xc000028f20) Reply frame received for 5\nI0524 21:18:20.709529 65 log.go:172] (0xc000028f20) Data frame received for 5\nI0524 21:18:20.709561 65 log.go:172] (0xc0002214a0) (5) Data frame handling\nI0524 21:18:20.709582 65 log.go:172] (0xc0002214a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 21:18:20.760006 65 log.go:172] (0xc000028f20) Data frame received for 3\nI0524 21:18:20.760052 65 log.go:172] (0xc0006dc000) (3) Data frame handling\nI0524 21:18:20.760091 65 log.go:172] (0xc0006dc000) 
(3) Data frame sent\nI0524 21:18:20.760209 65 log.go:172] (0xc000028f20) Data frame received for 5\nI0524 21:18:20.760221 65 log.go:172] (0xc0002214a0) (5) Data frame handling\nI0524 21:18:20.760324 65 log.go:172] (0xc000028f20) Data frame received for 3\nI0524 21:18:20.760350 65 log.go:172] (0xc0006dc000) (3) Data frame handling\nI0524 21:18:20.762217 65 log.go:172] (0xc000028f20) Data frame received for 1\nI0524 21:18:20.762235 65 log.go:172] (0xc000717d60) (1) Data frame handling\nI0524 21:18:20.762244 65 log.go:172] (0xc000717d60) (1) Data frame sent\nI0524 21:18:20.762545 65 log.go:172] (0xc000028f20) (0xc000717d60) Stream removed, broadcasting: 1\nI0524 21:18:20.762632 65 log.go:172] (0xc000028f20) Go away received\nI0524 21:18:20.762988 65 log.go:172] (0xc000028f20) (0xc000717d60) Stream removed, broadcasting: 1\nI0524 21:18:20.763011 65 log.go:172] (0xc000028f20) (0xc0006dc000) Stream removed, broadcasting: 3\nI0524 21:18:20.763023 65 log.go:172] (0xc000028f20) (0xc0002214a0) Stream removed, broadcasting: 5\n" May 24 21:18:20.768: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 21:18:20.768: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 24 21:18:30.802: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 24 21:18:40.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8375 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 21:18:41.100: INFO: stderr: "I0524 21:18:40.988523 85 log.go:172] (0xc0006faa50) (0xc0006de1e0) Create stream\nI0524 21:18:40.988577 85 log.go:172] (0xc0006faa50) (0xc0006de1e0) Stream added, broadcasting: 1\nI0524 
21:18:40.991216 85 log.go:172] (0xc0006faa50) Reply frame received for 1\nI0524 21:18:40.991267 85 log.go:172] (0xc0006faa50) (0xc0006de280) Create stream\nI0524 21:18:40.991280 85 log.go:172] (0xc0006faa50) (0xc0006de280) Stream added, broadcasting: 3\nI0524 21:18:40.992331 85 log.go:172] (0xc0006faa50) Reply frame received for 3\nI0524 21:18:40.992359 85 log.go:172] (0xc0006faa50) (0xc000471400) Create stream\nI0524 21:18:40.992369 85 log.go:172] (0xc0006faa50) (0xc000471400) Stream added, broadcasting: 5\nI0524 21:18:40.994388 85 log.go:172] (0xc0006faa50) Reply frame received for 5\nI0524 21:18:41.093663 85 log.go:172] (0xc0006faa50) Data frame received for 5\nI0524 21:18:41.093793 85 log.go:172] (0xc000471400) (5) Data frame handling\nI0524 21:18:41.093819 85 log.go:172] (0xc000471400) (5) Data frame sent\nI0524 21:18:41.093831 85 log.go:172] (0xc0006faa50) Data frame received for 5\nI0524 21:18:41.093840 85 log.go:172] (0xc000471400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 21:18:41.093864 85 log.go:172] (0xc0006faa50) Data frame received for 3\nI0524 21:18:41.093900 85 log.go:172] (0xc0006de280) (3) Data frame handling\nI0524 21:18:41.093927 85 log.go:172] (0xc0006de280) (3) Data frame sent\nI0524 21:18:41.093943 85 log.go:172] (0xc0006faa50) Data frame received for 3\nI0524 21:18:41.093964 85 log.go:172] (0xc0006de280) (3) Data frame handling\nI0524 21:18:41.095272 85 log.go:172] (0xc0006faa50) Data frame received for 1\nI0524 21:18:41.095300 85 log.go:172] (0xc0006de1e0) (1) Data frame handling\nI0524 21:18:41.095315 85 log.go:172] (0xc0006de1e0) (1) Data frame sent\nI0524 21:18:41.095350 85 log.go:172] (0xc0006faa50) (0xc0006de1e0) Stream removed, broadcasting: 1\nI0524 21:18:41.095376 85 log.go:172] (0xc0006faa50) Go away received\nI0524 21:18:41.095748 85 log.go:172] (0xc0006faa50) (0xc0006de1e0) Stream removed, broadcasting: 1\nI0524 21:18:41.095772 85 log.go:172] (0xc0006faa50) (0xc0006de280) Stream removed, 
broadcasting: 3\nI0524 21:18:41.095781 85 log.go:172] (0xc0006faa50) (0xc000471400) Stream removed, broadcasting: 5\n" May 24 21:18:41.101: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 21:18:41.101: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 21:18:51.122: INFO: Waiting for StatefulSet statefulset-8375/ss2 to complete update May 24 21:18:51.122: INFO: Waiting for Pod statefulset-8375/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 21:18:51.122: INFO: Waiting for Pod statefulset-8375/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 21:19:01.129: INFO: Waiting for StatefulSet statefulset-8375/ss2 to complete update May 24 21:19:01.129: INFO: Waiting for Pod statefulset-8375/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 21:19:11.131: INFO: Waiting for StatefulSet statefulset-8375/ss2 to complete update STEP: Rolling back to a previous revision May 24 21:19:21.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8375 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 21:19:21.388: INFO: stderr: "I0524 21:19:21.266556 108 log.go:172] (0xc000ad0c60) (0xc000ac43c0) Create stream\nI0524 21:19:21.267447 108 log.go:172] (0xc000ad0c60) (0xc000ac43c0) Stream added, broadcasting: 1\nI0524 21:19:21.271070 108 log.go:172] (0xc000ad0c60) Reply frame received for 1\nI0524 21:19:21.271107 108 log.go:172] (0xc000ad0c60) (0xc00064c640) Create stream\nI0524 21:19:21.271115 108 log.go:172] (0xc000ad0c60) (0xc00064c640) Stream added, broadcasting: 3\nI0524 21:19:21.272008 108 log.go:172] (0xc000ad0c60) Reply frame received for 3\nI0524 21:19:21.272043 108 log.go:172] (0xc000ad0c60) (0xc00074b400) Create stream\nI0524 21:19:21.272053 108 log.go:172] (0xc000ad0c60) 
(0xc00074b400) Stream added, broadcasting: 5\nI0524 21:19:21.272929 108 log.go:172] (0xc000ad0c60) Reply frame received for 5\nI0524 21:19:21.352143 108 log.go:172] (0xc000ad0c60) Data frame received for 5\nI0524 21:19:21.352169 108 log.go:172] (0xc00074b400) (5) Data frame handling\nI0524 21:19:21.352188 108 log.go:172] (0xc00074b400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 21:19:21.379240 108 log.go:172] (0xc000ad0c60) Data frame received for 3\nI0524 21:19:21.379281 108 log.go:172] (0xc00064c640) (3) Data frame handling\nI0524 21:19:21.379416 108 log.go:172] (0xc00064c640) (3) Data frame sent\nI0524 21:19:21.379451 108 log.go:172] (0xc000ad0c60) Data frame received for 5\nI0524 21:19:21.379471 108 log.go:172] (0xc00074b400) (5) Data frame handling\nI0524 21:19:21.379767 108 log.go:172] (0xc000ad0c60) Data frame received for 3\nI0524 21:19:21.379792 108 log.go:172] (0xc00064c640) (3) Data frame handling\nI0524 21:19:21.381756 108 log.go:172] (0xc000ad0c60) Data frame received for 1\nI0524 21:19:21.381778 108 log.go:172] (0xc000ac43c0) (1) Data frame handling\nI0524 21:19:21.381789 108 log.go:172] (0xc000ac43c0) (1) Data frame sent\nI0524 21:19:21.381803 108 log.go:172] (0xc000ad0c60) (0xc000ac43c0) Stream removed, broadcasting: 1\nI0524 21:19:21.381923 108 log.go:172] (0xc000ad0c60) Go away received\nI0524 21:19:21.382202 108 log.go:172] (0xc000ad0c60) (0xc000ac43c0) Stream removed, broadcasting: 1\nI0524 21:19:21.382235 108 log.go:172] (0xc000ad0c60) (0xc00064c640) Stream removed, broadcasting: 3\nI0524 21:19:21.382254 108 log.go:172] (0xc000ad0c60) (0xc00074b400) Stream removed, broadcasting: 5\n" May 24 21:19:21.389: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 21:19:21.389: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 21:19:31.420: INFO: Updating stateful set ss2 STEP: Rolling 
back update in reverse ordinal order May 24 21:19:41.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8375 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 21:19:41.700: INFO: stderr: "I0524 21:19:41.586498 127 log.go:172] (0xc00079ea50) (0xc000665c20) Create stream\nI0524 21:19:41.586570 127 log.go:172] (0xc00079ea50) (0xc000665c20) Stream added, broadcasting: 1\nI0524 21:19:41.589463 127 log.go:172] (0xc00079ea50) Reply frame received for 1\nI0524 21:19:41.589511 127 log.go:172] (0xc00079ea50) (0xc000aaa000) Create stream\nI0524 21:19:41.589524 127 log.go:172] (0xc00079ea50) (0xc000aaa000) Stream added, broadcasting: 3\nI0524 21:19:41.590597 127 log.go:172] (0xc00079ea50) Reply frame received for 3\nI0524 21:19:41.590647 127 log.go:172] (0xc00079ea50) (0xc000428000) Create stream\nI0524 21:19:41.590684 127 log.go:172] (0xc00079ea50) (0xc000428000) Stream added, broadcasting: 5\nI0524 21:19:41.591820 127 log.go:172] (0xc00079ea50) Reply frame received for 5\nI0524 21:19:41.694180 127 log.go:172] (0xc00079ea50) Data frame received for 5\nI0524 21:19:41.694245 127 log.go:172] (0xc000428000) (5) Data frame handling\nI0524 21:19:41.694265 127 log.go:172] (0xc000428000) (5) Data frame sent\nI0524 21:19:41.694280 127 log.go:172] (0xc00079ea50) Data frame received for 5\nI0524 21:19:41.694292 127 log.go:172] (0xc000428000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 21:19:41.694334 127 log.go:172] (0xc00079ea50) Data frame received for 3\nI0524 21:19:41.694358 127 log.go:172] (0xc000aaa000) (3) Data frame handling\nI0524 21:19:41.694380 127 log.go:172] (0xc000aaa000) (3) Data frame sent\nI0524 21:19:41.694444 127 log.go:172] (0xc00079ea50) Data frame received for 3\nI0524 21:19:41.694457 127 log.go:172] (0xc000aaa000) (3) Data frame handling\nI0524 21:19:41.695697 127 log.go:172] (0xc00079ea50) Data frame received for 1\nI0524 
21:19:41.695721 127 log.go:172] (0xc000665c20) (1) Data frame handling\nI0524 21:19:41.695740 127 log.go:172] (0xc000665c20) (1) Data frame sent\nI0524 21:19:41.695760 127 log.go:172] (0xc00079ea50) (0xc000665c20) Stream removed, broadcasting: 1\nI0524 21:19:41.695772 127 log.go:172] (0xc00079ea50) Go away received\nI0524 21:19:41.696161 127 log.go:172] (0xc00079ea50) (0xc000665c20) Stream removed, broadcasting: 1\nI0524 21:19:41.696190 127 log.go:172] (0xc00079ea50) (0xc000aaa000) Stream removed, broadcasting: 3\nI0524 21:19:41.696211 127 log.go:172] (0xc00079ea50) (0xc000428000) Stream removed, broadcasting: 5\n" May 24 21:19:41.700: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 21:19:41.700: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 21:19:51.719: INFO: Waiting for StatefulSet statefulset-8375/ss2 to complete update May 24 21:19:51.719: INFO: Waiting for Pod statefulset-8375/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 24 21:19:51.719: INFO: Waiting for Pod statefulset-8375/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 24 21:20:01.729: INFO: Waiting for StatefulSet statefulset-8375/ss2 to complete update May 24 21:20:01.729: INFO: Waiting for Pod statefulset-8375/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 24 21:20:11.726: INFO: Deleting all statefulset in ns statefulset-8375 May 24 21:20:11.728: INFO: Scaling statefulset ss2 to 0 May 24 21:20:31.762: INFO: Waiting for statefulset status.replicas updated to 0 May 24 21:20:31.764: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:20:31.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8375" for this suite. • [SLOW TEST:151.403 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":47,"skipped":723,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:20:31.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 24 
21:20:36.437: INFO: Successfully updated pod "labelsupdate8a203da6-1a4d-4ef3-8243-7be00462d553" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:20:40.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8886" for this suite. • [SLOW TEST:8.726 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":756,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:20:40.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 24 21:20:41.217: INFO: Pod name wrapped-volume-race-a2cb568a-973e-44e6-8ed6-926bbfefe33f: Found 0 pods out of 5 May 24 21:20:46.238: INFO: Pod name 
wrapped-volume-race-a2cb568a-973e-44e6-8ed6-926bbfefe33f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a2cb568a-973e-44e6-8ed6-926bbfefe33f in namespace emptydir-wrapper-3110, will wait for the garbage collector to delete the pods May 24 21:21:00.567: INFO: Deleting ReplicationController wrapped-volume-race-a2cb568a-973e-44e6-8ed6-926bbfefe33f took: 10.906088ms May 24 21:21:00.867: INFO: Terminating ReplicationController wrapped-volume-race-a2cb568a-973e-44e6-8ed6-926bbfefe33f pods took: 300.328989ms STEP: Creating RC which spawns configmap-volume pods May 24 21:21:09.762: INFO: Pod name wrapped-volume-race-bc2540fc-5fb7-441d-b9a8-c69d222411ea: Found 0 pods out of 5 May 24 21:21:14.771: INFO: Pod name wrapped-volume-race-bc2540fc-5fb7-441d-b9a8-c69d222411ea: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bc2540fc-5fb7-441d-b9a8-c69d222411ea in namespace emptydir-wrapper-3110, will wait for the garbage collector to delete the pods May 24 21:21:30.858: INFO: Deleting ReplicationController wrapped-volume-race-bc2540fc-5fb7-441d-b9a8-c69d222411ea took: 6.686408ms May 24 21:21:31.258: INFO: Terminating ReplicationController wrapped-volume-race-bc2540fc-5fb7-441d-b9a8-c69d222411ea pods took: 400.241301ms STEP: Creating RC which spawns configmap-volume pods May 24 21:21:39.713: INFO: Pod name wrapped-volume-race-1f57fccd-b1d9-4001-bc51-1f828ed0e7d7: Found 0 pods out of 5 May 24 21:21:44.722: INFO: Pod name wrapped-volume-race-1f57fccd-b1d9-4001-bc51-1f828ed0e7d7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1f57fccd-b1d9-4001-bc51-1f828ed0e7d7 in namespace emptydir-wrapper-3110, will wait for the garbage collector to delete the pods May 24 21:21:56.905: INFO: Deleting ReplicationController wrapped-volume-race-1f57fccd-b1d9-4001-bc51-1f828ed0e7d7 took: 6.718803ms May 24 
21:21:57.305: INFO: Terminating ReplicationController wrapped-volume-race-1f57fccd-b1d9-4001-bc51-1f828ed0e7d7 pods took: 400.20836ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:22:10.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3110" for this suite. • [SLOW TEST:90.377 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":49,"skipped":778,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:22:10.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-fe7969d5-a7c8-43fd-9c69-52cfab63a901 in namespace container-probe-3620 May 24 21:22:15.054: INFO: Started pod busybox-fe7969d5-a7c8-43fd-9c69-52cfab63a901 in namespace container-probe-3620 STEP: checking the pod's current state and verifying that restartCount is present May 24 21:22:15.057: INFO: Initial restart count of pod busybox-fe7969d5-a7c8-43fd-9c69-52cfab63a901 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:26:15.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3620" for this suite. • [SLOW TEST:244.955 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":795,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:26:15.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:26:16.492: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:26:18.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952376, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952376, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952376, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952376, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:26:20.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952376, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952376, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952376, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952376, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:26:23.571: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:26:23.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8381-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:26:24.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-931" for this suite. STEP: Destroying namespace "webhook-931-markers" for this suite. 
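The repeated DeploymentStatus dumps above show the framework polling until the webhook Deployment stops reporting `UnavailableReplicas:1` and becomes available. A minimal sketch of that readiness predicate, using the field names visible in the dumps (this mirrors the framework's check for illustration; it is not the framework's actual code):

```python
# Hedged sketch: readiness predicate for a Deployment, modeled on the
# DeploymentStatus fields logged above. Illustrative only.

def deployment_ready(status, desired_replicas=1):
    """Ready once every desired replica is updated, ready, and available,
    and none are unavailable."""
    return (
        status["UpdatedReplicas"] == desired_replicas
        and status["ReadyReplicas"] == desired_replicas
        and status["AvailableReplicas"] == desired_replicas
        and status["UnavailableReplicas"] == 0
    )

# Status as logged at 21:26:18 while the webhook pod was still starting:
pending = {"UpdatedReplicas": 1, "ReadyReplicas": 0,
           "AvailableReplicas": 0, "UnavailableReplicas": 1}
# Status once the pod is up (the state the poll loop is waiting for):
ready = {"UpdatedReplicas": 1, "ReadyReplicas": 1,
         "AvailableReplicas": 1, "UnavailableReplicas": 0}

print(deployment_ready(pending))  # False
print(deployment_ready(ready))    # True
```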
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.061 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":51,"skipped":815,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:26:24.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:26:25.635: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:26:27.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952385, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952385, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952385, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725952385, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:26:30.693: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:26:30.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-613" for this suite. STEP: Destroying namespace "webhook-613-markers" for this suite. 
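The two AdmissionWebhook tests above register mutating webhooks and verify that the API server applies the webhook's patch to incoming objects. As a self-contained sketch of the mechanism being exercised (the patch contents below are illustrative assumptions, not the e2e webhook's actual patch): a mutating webhook replies with an `AdmissionReview` whose response carries a base64-encoded JSONPatch.

```python
# Hedged sketch of a mutating admission webhook response. The API shape
# (admission.k8s.io/v1 AdmissionReview, patchType JSONPatch, base64 patch)
# is the real Kubernetes contract; the specific patch op and label name
# are hypothetical examples.
import base64
import json

def admission_response(uid, patch_ops):
    """Build an AdmissionReview response that allows the request and
    attaches a JSONPatch for the API server to apply."""
    patch = json.dumps(patch_ops).encode()
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(patch).decode(),
        },
    }

resp = admission_response("hypothetical-uid", [
    {"op": "add", "path": "/metadata/labels/added-by-webhook", "value": "true"},
])
decoded = json.loads(base64.b64decode(resp["response"]["patch"]))
print(decoded[0]["op"])  # add
```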
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.034 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":52,"skipped":816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:26:30.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-12ba3f0d-a910-41a6-98bd-204545280736 STEP: Creating a pod to test consume secrets May 24 21:26:31.080: INFO: Waiting up to 5m0s for pod "pod-secrets-cd4d1284-6e34-49b6-ae95-f7765a3fb196" in namespace "secrets-3770" to be "success or failure" May 24 21:26:31.083: INFO: Pod "pod-secrets-cd4d1284-6e34-49b6-ae95-f7765a3fb196": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.884964ms May 24 21:26:33.088: INFO: Pod "pod-secrets-cd4d1284-6e34-49b6-ae95-f7765a3fb196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008039304s May 24 21:26:35.092: INFO: Pod "pod-secrets-cd4d1284-6e34-49b6-ae95-f7765a3fb196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012077242s STEP: Saw pod success May 24 21:26:35.092: INFO: Pod "pod-secrets-cd4d1284-6e34-49b6-ae95-f7765a3fb196" satisfied condition "success or failure" May 24 21:26:35.095: INFO: Trying to get logs from node jerma-worker pod pod-secrets-cd4d1284-6e34-49b6-ae95-f7765a3fb196 container secret-volume-test: STEP: delete the pod May 24 21:26:35.144: INFO: Waiting for pod pod-secrets-cd4d1284-6e34-49b6-ae95-f7765a3fb196 to disappear May 24 21:26:35.177: INFO: Pod pod-secrets-cd4d1284-6e34-49b6-ae95-f7765a3fb196 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:26:35.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3770" for this suite. 
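The Secrets test above mounts a secret volume with a non-default `defaultMode` and checks the permissions of the projected file. The actual mode value used by the test is not shown in this log; taking 0o400 purely as an assumed example, the mode's nine permission bits map to an ls-style string like so:

```python
# Hedged sketch: rendering a defaultMode value (octal permission bits,
# as set on a secret volume) the way `ls -l` would. 0o400 is an assumed
# example value, not taken from the log.

def rwx(mode):
    """Render the 9 permission bits as an rwx string."""
    chars = "rwxrwxrwx"
    return "".join(c if mode & (1 << (8 - i)) else "-"
                   for i, c in enumerate(chars))

print(oct(0o400), rwx(0o400))  # 0o400 r--------
print(oct(0o644), rwx(0o644))  # 0o644 rw-r--r--
```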
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":848,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:26:35.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-5133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5133.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5133.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 201.69.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.69.201_udp@PTR;check="$$(dig +tcp +noall +answer +search 201.69.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.69.201_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.test-service-2.dns-5133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5133.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5133.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 201.69.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.69.201_udp@PTR;check="$$(dig +tcp +noall +answer +search 201.69.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.69.201_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 21:26:41.359: INFO: Unable to read wheezy_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:41.384: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:41.387: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:41.390: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:41.433: INFO: Unable to read jessie_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:41.437: INFO: Unable to read jessie_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:41.440: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:41.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:41.473: INFO: Lookups using dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78 failed for: [wheezy_udp@dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_udp@dns-test-service.dns-5133.svc.cluster.local jessie_tcp@dns-test-service.dns-5133.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local] May 24 21:26:46.478: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:46.481: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:46.485: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:46.488: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:46.510: INFO: Unable to read jessie_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:46.513: INFO: Unable to read jessie_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:46.515: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:46.518: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod 
dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:46.531: INFO: Lookups using dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78 failed for: [wheezy_udp@dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_udp@dns-test-service.dns-5133.svc.cluster.local jessie_tcp@dns-test-service.dns-5133.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local] May 24 21:26:51.477: INFO: Unable to read wheezy_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:51.480: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:51.483: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:51.486: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:51.508: INFO: Unable to read jessie_udp@dns-test-service.dns-5133.svc.cluster.local from pod 
dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:51.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:51.515: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:51.518: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:51.555: INFO: Lookups using dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78 failed for: [wheezy_udp@dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_udp@dns-test-service.dns-5133.svc.cluster.local jessie_tcp@dns-test-service.dns-5133.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local] May 24 21:26:56.477: INFO: Unable to read wheezy_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:56.481: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local from pod 
dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:56.484: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:56.487: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:56.507: INFO: Unable to read jessie_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:56.510: INFO: Unable to read jessie_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:56.513: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:56.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:26:56.536: INFO: Lookups using dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78 failed for: [wheezy_udp@dns-test-service.dns-5133.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_udp@dns-test-service.dns-5133.svc.cluster.local jessie_tcp@dns-test-service.dns-5133.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local] May 24 21:27:01.517: INFO: Unable to read wheezy_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:01.520: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:01.523: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:01.526: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:01.548: INFO: Unable to read jessie_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:01.551: INFO: Unable to read jessie_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested 
resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:01.555: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:01.558: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:01.577: INFO: Lookups using dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78 failed for: [wheezy_udp@dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_udp@dns-test-service.dns-5133.svc.cluster.local jessie_tcp@dns-test-service.dns-5133.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local] May 24 21:27:06.478: INFO: Unable to read wheezy_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:06.482: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:06.484: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods 
dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:06.487: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:06.511: INFO: Unable to read jessie_udp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:06.514: INFO: Unable to read jessie_tcp@dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:06.516: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:06.519: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local from pod dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78: the server could not find the requested resource (get pods dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78) May 24 21:27:06.537: INFO: Lookups using dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78 failed for: [wheezy_udp@dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@dns-test-service.dns-5133.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local jessie_udp@dns-test-service.dns-5133.svc.cluster.local jessie_tcp@dns-test-service.dns-5133.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-5133.svc.cluster.local] May 24 21:27:11.543: INFO: DNS probes using dns-5133/dns-test-b8faf374-d61d-4f02-b68c-e4623c086b78 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:27:11.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5133" for this suite. • [SLOW TEST:36.838 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":54,"skipped":852,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:27:12.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2459 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2459 STEP: Creating statefulset with conflicting port in namespace statefulset-2459 STEP: Waiting until pod test-pod will start running in namespace statefulset-2459 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2459 May 24 21:27:18.503: INFO: Observed stateful pod in namespace: statefulset-2459, name: ss-0, uid: 067aa6e9-1646-464a-8884-50088d96f2fa, status phase: Pending. Waiting for statefulset controller to delete. May 24 21:27:18.882: INFO: Observed stateful pod in namespace: statefulset-2459, name: ss-0, uid: 067aa6e9-1646-464a-8884-50088d96f2fa, status phase: Failed. Waiting for statefulset controller to delete. May 24 21:27:18.889: INFO: Observed stateful pod in namespace: statefulset-2459, name: ss-0, uid: 067aa6e9-1646-464a-8884-50088d96f2fa, status phase: Failed. Waiting for statefulset controller to delete. 
May 24 21:27:18.901: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2459 STEP: Removing pod with conflicting port in namespace statefulset-2459 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2459 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 24 21:27:24.962: INFO: Deleting all statefulset in ns statefulset-2459 May 24 21:27:24.964: INFO: Scaling statefulset ss to 0 May 24 21:27:34.998: INFO: Waiting for statefulset status.replicas updated to 0 May 24 21:27:35.002: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:27:35.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2459" for this suite. • [SLOW TEST:23.001 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":55,"skipped":946,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:27:35.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5718 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 21:27:35.105: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 24 21:27:59.201: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.109 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5718 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:27:59.202: INFO: >>> kubeConfig: /root/.kube/config I0524 21:27:59.236936 6 log.go:172] (0xc0016ccbb0) (0xc0015394a0) Create stream I0524 21:27:59.236968 6 log.go:172] (0xc0016ccbb0) (0xc0015394a0) Stream added, broadcasting: 1 I0524 21:27:59.239325 6 log.go:172] (0xc0016ccbb0) Reply frame received for 1 I0524 21:27:59.239368 6 log.go:172] (0xc0016ccbb0) (0xc001e723c0) Create stream I0524 21:27:59.239387 6 log.go:172] (0xc0016ccbb0) (0xc001e723c0) Stream added, broadcasting: 3 I0524 21:27:59.240378 6 log.go:172] (0xc0016ccbb0) Reply frame received for 3 I0524 21:27:59.240441 6 log.go:172] (0xc0016ccbb0) (0xc001e72460) Create stream I0524 21:27:59.240457 6 log.go:172] (0xc0016ccbb0) (0xc001e72460) Stream added, broadcasting: 5 I0524 21:27:59.241858 6 log.go:172] (0xc0016ccbb0) Reply frame received for 5 I0524 21:28:00.404400 6 log.go:172] (0xc0016ccbb0) Data frame received for 3 I0524 21:28:00.404463 
6 log.go:172] (0xc001e723c0) (3) Data frame handling I0524 21:28:00.404505 6 log.go:172] (0xc001e723c0) (3) Data frame sent I0524 21:28:00.404588 6 log.go:172] (0xc0016ccbb0) Data frame received for 3 I0524 21:28:00.404628 6 log.go:172] (0xc001e723c0) (3) Data frame handling I0524 21:28:00.405078 6 log.go:172] (0xc0016ccbb0) Data frame received for 5 I0524 21:28:00.405329 6 log.go:172] (0xc001e72460) (5) Data frame handling I0524 21:28:00.407291 6 log.go:172] (0xc0016ccbb0) Data frame received for 1 I0524 21:28:00.407328 6 log.go:172] (0xc0015394a0) (1) Data frame handling I0524 21:28:00.407372 6 log.go:172] (0xc0015394a0) (1) Data frame sent I0524 21:28:00.407410 6 log.go:172] (0xc0016ccbb0) (0xc0015394a0) Stream removed, broadcasting: 1 I0524 21:28:00.407462 6 log.go:172] (0xc0016ccbb0) Go away received I0524 21:28:00.407615 6 log.go:172] (0xc0016ccbb0) (0xc0015394a0) Stream removed, broadcasting: 1 I0524 21:28:00.407689 6 log.go:172] (0xc0016ccbb0) (0xc001e723c0) Stream removed, broadcasting: 3 I0524 21:28:00.407748 6 log.go:172] (0xc0016ccbb0) (0xc001e72460) Stream removed, broadcasting: 5 May 24 21:28:00.407: INFO: Found all expected endpoints: [netserver-0] May 24 21:28:00.411: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.165 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5718 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:28:00.411: INFO: >>> kubeConfig: /root/.kube/config I0524 21:28:00.443815 6 log.go:172] (0xc0016cd130) (0xc001539720) Create stream I0524 21:28:00.443857 6 log.go:172] (0xc0016cd130) (0xc001539720) Stream added, broadcasting: 1 I0524 21:28:00.446645 6 log.go:172] (0xc0016cd130) Reply frame received for 1 I0524 21:28:00.446684 6 log.go:172] (0xc0016cd130) (0xc0027a5c20) Create stream I0524 21:28:00.446700 6 log.go:172] (0xc0016cd130) (0xc0027a5c20) Stream added, broadcasting: 3 I0524 21:28:00.447846 6 log.go:172] 
(0xc0016cd130) Reply frame received for 3 I0524 21:28:00.447922 6 log.go:172] (0xc0016cd130) (0xc0027a5cc0) Create stream I0524 21:28:00.447949 6 log.go:172] (0xc0016cd130) (0xc0027a5cc0) Stream added, broadcasting: 5 I0524 21:28:00.449018 6 log.go:172] (0xc0016cd130) Reply frame received for 5 I0524 21:28:01.552313 6 log.go:172] (0xc0016cd130) Data frame received for 3 I0524 21:28:01.552428 6 log.go:172] (0xc0027a5c20) (3) Data frame handling I0524 21:28:01.552544 6 log.go:172] (0xc0027a5c20) (3) Data frame sent I0524 21:28:01.552841 6 log.go:172] (0xc0016cd130) Data frame received for 5 I0524 21:28:01.552870 6 log.go:172] (0xc0027a5cc0) (5) Data frame handling I0524 21:28:01.553363 6 log.go:172] (0xc0016cd130) Data frame received for 3 I0524 21:28:01.553400 6 log.go:172] (0xc0027a5c20) (3) Data frame handling I0524 21:28:01.555639 6 log.go:172] (0xc0016cd130) Data frame received for 1 I0524 21:28:01.555668 6 log.go:172] (0xc001539720) (1) Data frame handling I0524 21:28:01.555686 6 log.go:172] (0xc001539720) (1) Data frame sent I0524 21:28:01.555706 6 log.go:172] (0xc0016cd130) (0xc001539720) Stream removed, broadcasting: 1 I0524 21:28:01.555725 6 log.go:172] (0xc0016cd130) Go away received I0524 21:28:01.555872 6 log.go:172] (0xc0016cd130) (0xc001539720) Stream removed, broadcasting: 1 I0524 21:28:01.555910 6 log.go:172] (0xc0016cd130) (0xc0027a5c20) Stream removed, broadcasting: 3 I0524 21:28:01.555936 6 log.go:172] (0xc0016cd130) (0xc0027a5cc0) Stream removed, broadcasting: 5 May 24 21:28:01.555: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:28:01.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5718" for this suite. 
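The node-pod UDP check above shells into a host-network test pod and runs `echo hostName | nc -w 1 -u <podIP> 8081`, expecting the agnhost netserver on the target pod to echo back its endpoint name ("Found all expected endpoints: [netserver-0]"). A minimal self-contained sketch of that request/response pattern, using a local UDP socket in place of agnhost (the port and the `hostName` payload follow the log; the server helper and endpoint name are illustrative, not the real agnhost implementation):

```python
import socket
import threading

def start_udp_echo_server(host="127.0.0.1", port=0, name="netserver-0"):
    """Reply to a 'hostName' UDP datagram with a fixed endpoint name,
    loosely mimicking what the agnhost netserver does for this probe."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    port = sock.getsockname()[1]

    def serve():
        data, addr = sock.recvfrom(1024)
        if data.strip() == b"hostName":
            sock.sendto(name.encode(), addr)

    threading.Thread(target=serve, daemon=True).start()
    return port

def probe(host, port, timeout=1.0):
    """Send 'hostName' over UDP and return the reply,
    mirroring `echo hostName | nc -w 1 -u <ip> <port>`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(b"hostName", (host, port))
    reply, _ = sock.recvfrom(1024)
    return reply.decode()

port = start_udp_echo_server()
print(probe("127.0.0.1", port))  # the endpoint name the test collects
```

The e2e test repeats this probe against each netserver pod IP and passes once every expected endpoint name has been seen.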
• [SLOW TEST:26.535 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":958,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:28:01.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:28:06.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7030" for this suite. 
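The concurrent-watch test above asserts that watches opened from different resource versions of the same event stream all observe events in one consistent order. A toy in-memory sketch of that guarantee, assuming a simplified model where `resourceVersion` is just an increasing log position (this is conceptual bookkeeping, not real client-go or apiserver code):

```python
class EventLog:
    """Append-only event log: a watch started at any resourceVersion
    sees the same ordered suffix of events as every other watch."""

    def __init__(self):
        self.events = []  # list of (resource_version, event)

    def append(self, event):
        rv = len(self.events) + 1  # monotonically increasing version
        self.events.append((rv, event))
        return rv

    def watch(self, since_rv=0):
        """Return every event with resourceVersion > since_rv, in log order."""
        return [(rv, ev) for rv, ev in self.events if rv > since_rv]

log = EventLog()
for ev in ("ADDED", "MODIFIED", "DELETED"):
    log.append(ev)

# Watches from different starting versions agree on the order of the
# events they share -- the property the e2e test verifies at scale.
assert log.watch(0)[1:] == log.watch(1)
```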
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":57,"skipped":961,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:28:06.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 24 21:28:06.276: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7244 /api/v1/namespaces/watch-7244/configmaps/e2e-watch-test-watch-closed e5afc935-3176-451d-806a-d50095870347 18853413 0 2020-05-24 21:28:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 21:28:06.276: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7244 /api/v1/namespaces/watch-7244/configmaps/e2e-watch-test-watch-closed e5afc935-3176-451d-806a-d50095870347 18853414 0 2020-05-24 21:28:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the 
configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 24 21:28:06.297: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7244 /api/v1/namespaces/watch-7244/configmaps/e2e-watch-test-watch-closed e5afc935-3176-451d-806a-d50095870347 18853415 0 2020-05-24 21:28:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 21:28:06.297: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7244 /api/v1/namespaces/watch-7244/configmaps/e2e-watch-test-watch-closed e5afc935-3176-451d-806a-d50095870347 18853416 0 2020-05-24 21:28:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:28:06.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7244" for this suite. 
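The restart-watch test above closes a watch after two notifications, mutates the configmap while no watch is open, then reopens a watch from the last observed resource version and expects to receive exactly the missed changes (the MODIFIED with `mutation: 2` and the DELETED). A minimal sketch of that resume semantics, reusing the resource versions 18853413–18853416 from the log (the `watch` helper is an illustrative stand-in for the API's `resourceVersion` parameter, not client-go):

```python
def watch(events, since_rv):
    """Yield events newer than since_rv, like opening a watch
    with ?watch=true&resourceVersion=<since_rv>."""
    for rv, ev in events:
        if rv > since_rv:
            yield rv, ev

# The configmap's recorded history, numbered like the log's resourceVersions.
events = [
    (18853413, "ADDED"),
    (18853414, "MODIFIED"),  # mutation: 1
    (18853415, "MODIFIED"),  # mutation: 2, made while the watch was closed
    (18853416, "DELETED"),
]

# First watch: closed after two notifications.
first = watch(events, since_rv=0)
seen = [next(first), next(first)]
last_rv = seen[-1][0]

# Restarted watch picks up exactly the changes made while it was closed.
resumed = list(watch(events, since_rv=last_rv))
assert [ev for _, ev in resumed] == ["MODIFIED", "DELETED"]
```

In the real API this only works while the requested resource version is still within the server's watch cache window; an expired version yields a 410 Gone and the client must relist.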
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":58,"skipped":993,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:28:06.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 24 21:28:06.379: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:28:23.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3831" for this suite. 
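The CRD rename test above manipulates the `spec.versions` list of a multi-version CustomResourceDefinition: renaming one version must make the new name served in the published OpenAPI spec, remove the old name, and leave the other version untouched. A hedged sketch of the shape being edited (the group, kind, and version names here are illustrative, not the test's actual fixture):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.example.com   # illustrative name
spec:
  group: example.com
  names:
    kind: E2eTestCrd
    listKind: E2eTestCrdList
    plural: e2e-test-crds
    singular: e2e-test-crd
  scope: Namespaced
  versions:
  - name: v2          # renamed by the test; the old name must stop being served
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v4          # the "other version", expected to be unchanged
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```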
• [SLOW TEST:16.714 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":59,"skipped":1006,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:28:23.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 24 21:28:23.071: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 21:28:23.082: INFO: Waiting for terminating namespaces to be deleted... 
May 24 21:28:23.109: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 24 21:28:23.127: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 24 21:28:23.127: INFO: Container kindnet-cni ready: true, restart count 0 May 24 21:28:23.127: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 24 21:28:23.127: INFO: Container kube-proxy ready: true, restart count 0 May 24 21:28:23.127: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 24 21:28:23.147: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 24 21:28:23.147: INFO: Container kindnet-cni ready: true, restart count 0 May 24 21:28:23.147: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 24 21:28:23.147: INFO: Container kube-bench ready: false, restart count 0 May 24 21:28:23.147: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 24 21:28:23.147: INFO: Container kube-proxy ready: true, restart count 0 May 24 21:28:23.147: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 24 21:28:23.147: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 24 21:28:23.264: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 24 21:28:23.264: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 24 21:28:23.264: INFO: Pod kube-proxy-44mlz requesting resource
cpu=0m on Node jerma-worker May 24 21:28:23.264: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 24 21:28:23.264: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 24 21:28:23.269: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-36870a64-2192-45aa-a487-3a71db9050d5.161214160a4da808], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6653/filler-pod-36870a64-2192-45aa-a487-3a71db9050d5 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-36870a64-2192-45aa-a487-3a71db9050d5.161214169e88391e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-36870a64-2192-45aa-a487-3a71db9050d5.16121416e1dfeea1], Reason = [Created], Message = [Created container filler-pod-36870a64-2192-45aa-a487-3a71db9050d5] STEP: Considering event: Type = [Normal], Name = [filler-pod-36870a64-2192-45aa-a487-3a71db9050d5.16121416f034531a], Reason = [Started], Message = [Started container filler-pod-36870a64-2192-45aa-a487-3a71db9050d5] STEP: Considering event: Type = [Normal], Name = [filler-pod-6f93495a-a768-4547-a6e4-3f833c2f127b.1612141608793c47], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6653/filler-pod-6f93495a-a768-4547-a6e4-3f833c2f127b to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6f93495a-a768-4547-a6e4-3f833c2f127b.161214165412ec91], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6f93495a-a768-4547-a6e4-3f833c2f127b.16121416b336d94c], Reason = [Created], Message = [Created container 
filler-pod-6f93495a-a768-4547-a6e4-3f833c2f127b] STEP: Considering event: Type = [Normal], Name = [filler-pod-6f93495a-a768-4547-a6e4-3f833c2f127b.16121416c4e6852d], Reason = [Started], Message = [Started container filler-pod-6f93495a-a768-4547-a6e4-3f833c2f127b] STEP: Considering event: Type = [Warning], Name = [additional-pod.1612141770f33676], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:28:30.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6653" for this suite. 
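The filler-pod sizing and the final FailedScheduling event above follow from the scheduler's CPU fit predicate: a pod fits a node only if its CPU request does not exceed allocatable minus the requests already on the node. A minimal sketch of that arithmetic (the allocatable figure below is hypothetical, not read from jerma-worker):

```python
# Sketch of the scheduler's CPU fit check, in millicores.
# A pod fits iff its request <= allocatable - sum of existing requests.
def fits(allocatable_m: int, requested_m: int, pod_request_m: int) -> bool:
    """True if a pod requesting `pod_request_m` millicores fits the node."""
    return pod_request_m <= allocatable_m - requested_m

allocatable = 16000   # hypothetical node allocatable, in millicores
existing = 100        # e.g. the kindnet pod's cpu=100m request seen above
filler = allocatable - existing - 600   # consume most, but not all, remaining CPU

assert fits(allocatable, existing, filler)              # filler pod schedules
assert not fits(allocatable, existing + filler, 1000)   # extra pod: Insufficient cpu
```

After the filler pods land, only a small remainder is free on each node, so the "additional" pod requesting more than that remainder is rejected, which is exactly the Insufficient cpu event logged above.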
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.414 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":60,"skipped":1014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:28:30.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:28:37.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9366" for this suite. STEP: Destroying namespace "nsdeletetest-5229" for this suite. May 24 21:28:37.680: INFO: Namespace nsdeletetest-5229 was already deleted STEP: Destroying namespace "nsdeletetest-6691" for this suite. • [SLOW TEST:7.248 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":61,"skipped":1047,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:28:37.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic 
StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1141 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-1141 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1141 May 24 21:28:37.822: INFO: Found 0 stateful pods, waiting for 1 May 24 21:28:47.828: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 24 21:28:47.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1141 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 21:28:50.900: INFO: stderr: "I0524 21:28:50.736194 155 log.go:172] (0xc0007feb00) (0xc00061bea0) Create stream\nI0524 21:28:50.736231 155 log.go:172] (0xc0007feb00) (0xc00061bea0) Stream added, broadcasting: 1\nI0524 21:28:50.739153 155 log.go:172] (0xc0007feb00) Reply frame received for 1\nI0524 21:28:50.739205 155 log.go:172] (0xc0007feb00) (0xc0005a2640) Create stream\nI0524 21:28:50.739217 155 log.go:172] (0xc0007feb00) (0xc0005a2640) Stream added, broadcasting: 3\nI0524 21:28:50.740286 155 log.go:172] (0xc0007feb00) Reply frame received for 3\nI0524 21:28:50.740341 155 log.go:172] (0xc0007feb00) (0xc0007ae6e0) Create stream\nI0524 21:28:50.740364 155 log.go:172] (0xc0007feb00) (0xc0007ae6e0) Stream added, broadcasting: 5\nI0524 21:28:50.741776 155 log.go:172] (0xc0007feb00) Reply frame received for 5\nI0524 21:28:50.836057 155 log.go:172] (0xc0007feb00) Data frame received for 5\nI0524 21:28:50.836088 155 log.go:172] (0xc0007ae6e0) (5) Data frame handling\nI0524 
21:28:50.836109 155 log.go:172] (0xc0007ae6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 21:28:50.889892 155 log.go:172] (0xc0007feb00) Data frame received for 5\nI0524 21:28:50.889929 155 log.go:172] (0xc0007ae6e0) (5) Data frame handling\nI0524 21:28:50.889961 155 log.go:172] (0xc0007feb00) Data frame received for 3\nI0524 21:28:50.889997 155 log.go:172] (0xc0005a2640) (3) Data frame handling\nI0524 21:28:50.890080 155 log.go:172] (0xc0005a2640) (3) Data frame sent\nI0524 21:28:50.890114 155 log.go:172] (0xc0007feb00) Data frame received for 3\nI0524 21:28:50.890136 155 log.go:172] (0xc0005a2640) (3) Data frame handling\nI0524 21:28:50.892411 155 log.go:172] (0xc0007feb00) Data frame received for 1\nI0524 21:28:50.892449 155 log.go:172] (0xc00061bea0) (1) Data frame handling\nI0524 21:28:50.892482 155 log.go:172] (0xc00061bea0) (1) Data frame sent\nI0524 21:28:50.892553 155 log.go:172] (0xc0007feb00) (0xc00061bea0) Stream removed, broadcasting: 1\nI0524 21:28:50.892577 155 log.go:172] (0xc0007feb00) Go away received\nI0524 21:28:50.893032 155 log.go:172] (0xc0007feb00) (0xc00061bea0) Stream removed, broadcasting: 1\nI0524 21:28:50.893063 155 log.go:172] (0xc0007feb00) (0xc0005a2640) Stream removed, broadcasting: 3\nI0524 21:28:50.893076 155 log.go:172] (0xc0007feb00) (0xc0007ae6e0) Stream removed, broadcasting: 5\n" May 24 21:28:50.900: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 21:28:50.900: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 21:28:50.903: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 24 21:29:00.908: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 21:29:00.908: INFO: Waiting for statefulset status.replicas updated to 0 May 24 21:29:00.926: INFO: POD NODE PHASE GRACE 
CONDITIONS May 24 21:29:00.926: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC }] May 24 21:29:00.926: INFO: May 24 21:29:00.926: INFO: StatefulSet ss has not reached scale 3, at 1 May 24 21:29:01.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988663038s May 24 21:29:02.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983839768s May 24 21:29:03.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.929545854s May 24 21:29:04.995: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.924538546s May 24 21:29:06.000: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.919889129s May 24 21:29:07.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.914592171s May 24 21:29:08.011: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.909042294s May 24 21:29:09.017: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.903529412s May 24 21:29:10.063: INFO: Verifying statefulset ss doesn't scale past 3 for another 898.222131ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1141 May 24 21:29:11.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1141 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 21:29:11.281: INFO: stderr: "I0524 21:29:11.184978 189 log.go:172] (0xc000104e70) (0xc000aba0a0) Create stream\nI0524 21:29:11.185041 189 log.go:172] (0xc000104e70) 
(0xc000aba0a0) Stream added, broadcasting: 1\nI0524 21:29:11.188041 189 log.go:172] (0xc000104e70) Reply frame received for 1\nI0524 21:29:11.188082 189 log.go:172] (0xc000104e70) (0xc000aba140) Create stream\nI0524 21:29:11.188093 189 log.go:172] (0xc000104e70) (0xc000aba140) Stream added, broadcasting: 3\nI0524 21:29:11.189031 189 log.go:172] (0xc000104e70) Reply frame received for 3\nI0524 21:29:11.189068 189 log.go:172] (0xc000104e70) (0xc000664780) Create stream\nI0524 21:29:11.189083 189 log.go:172] (0xc000104e70) (0xc000664780) Stream added, broadcasting: 5\nI0524 21:29:11.190256 189 log.go:172] (0xc000104e70) Reply frame received for 5\nI0524 21:29:11.274339 189 log.go:172] (0xc000104e70) Data frame received for 5\nI0524 21:29:11.274369 189 log.go:172] (0xc000664780) (5) Data frame handling\nI0524 21:29:11.274388 189 log.go:172] (0xc000664780) (5) Data frame sent\nI0524 21:29:11.274397 189 log.go:172] (0xc000104e70) Data frame received for 5\nI0524 21:29:11.274406 189 log.go:172] (0xc000664780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 21:29:11.274418 189 log.go:172] (0xc000104e70) Data frame received for 3\nI0524 21:29:11.274444 189 log.go:172] (0xc000aba140) (3) Data frame handling\nI0524 21:29:11.274452 189 log.go:172] (0xc000aba140) (3) Data frame sent\nI0524 21:29:11.274461 189 log.go:172] (0xc000104e70) Data frame received for 3\nI0524 21:29:11.274467 189 log.go:172] (0xc000aba140) (3) Data frame handling\nI0524 21:29:11.275854 189 log.go:172] (0xc000104e70) Data frame received for 1\nI0524 21:29:11.275925 189 log.go:172] (0xc000aba0a0) (1) Data frame handling\nI0524 21:29:11.275945 189 log.go:172] (0xc000aba0a0) (1) Data frame sent\nI0524 21:29:11.275959 189 log.go:172] (0xc000104e70) (0xc000aba0a0) Stream removed, broadcasting: 1\nI0524 21:29:11.275981 189 log.go:172] (0xc000104e70) Go away received\nI0524 21:29:11.276445 189 log.go:172] (0xc000104e70) (0xc000aba0a0) Stream removed, broadcasting: 1\nI0524 
21:29:11.276472 189 log.go:172] (0xc000104e70) (0xc000aba140) Stream removed, broadcasting: 3\nI0524 21:29:11.276487 189 log.go:172] (0xc000104e70) (0xc000664780) Stream removed, broadcasting: 5\n" May 24 21:29:11.282: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 21:29:11.282: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 21:29:11.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1141 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 21:29:11.475: INFO: stderr: "I0524 21:29:11.408095 212 log.go:172] (0xc0009ea0b0) (0xc000407680) Create stream\nI0524 21:29:11.408157 212 log.go:172] (0xc0009ea0b0) (0xc000407680) Stream added, broadcasting: 1\nI0524 21:29:11.410707 212 log.go:172] (0xc0009ea0b0) Reply frame received for 1\nI0524 21:29:11.410744 212 log.go:172] (0xc0009ea0b0) (0xc0009b4000) Create stream\nI0524 21:29:11.410758 212 log.go:172] (0xc0009ea0b0) (0xc0009b4000) Stream added, broadcasting: 3\nI0524 21:29:11.411452 212 log.go:172] (0xc0009ea0b0) Reply frame received for 3\nI0524 21:29:11.411484 212 log.go:172] (0xc0009ea0b0) (0xc0009b40a0) Create stream\nI0524 21:29:11.411497 212 log.go:172] (0xc0009ea0b0) (0xc0009b40a0) Stream added, broadcasting: 5\nI0524 21:29:11.412301 212 log.go:172] (0xc0009ea0b0) Reply frame received for 5\nI0524 21:29:11.468177 212 log.go:172] (0xc0009ea0b0) Data frame received for 3\nI0524 21:29:11.468224 212 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0524 21:29:11.468246 212 log.go:172] (0xc0009b4000) (3) Data frame sent\nI0524 21:29:11.468294 212 log.go:172] (0xc0009ea0b0) Data frame received for 5\nI0524 21:29:11.468327 212 log.go:172] (0xc0009b40a0) (5) Data frame handling\nI0524 21:29:11.468342 212 log.go:172] (0xc0009b40a0) (5) Data frame sent\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0524 21:29:11.468366 212 log.go:172] (0xc0009ea0b0) Data frame received for 3\nI0524 21:29:11.468400 212 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0524 21:29:11.468424 212 log.go:172] (0xc0009ea0b0) Data frame received for 5\nI0524 21:29:11.468436 212 log.go:172] (0xc0009b40a0) (5) Data frame handling\nI0524 21:29:11.469821 212 log.go:172] (0xc0009ea0b0) Data frame received for 1\nI0524 21:29:11.469852 212 log.go:172] (0xc000407680) (1) Data frame handling\nI0524 21:29:11.469874 212 log.go:172] (0xc000407680) (1) Data frame sent\nI0524 21:29:11.469902 212 log.go:172] (0xc0009ea0b0) (0xc000407680) Stream removed, broadcasting: 1\nI0524 21:29:11.469937 212 log.go:172] (0xc0009ea0b0) Go away received\nI0524 21:29:11.470441 212 log.go:172] (0xc0009ea0b0) (0xc000407680) Stream removed, broadcasting: 1\nI0524 21:29:11.470467 212 log.go:172] (0xc0009ea0b0) (0xc0009b4000) Stream removed, broadcasting: 3\nI0524 21:29:11.470480 212 log.go:172] (0xc0009ea0b0) (0xc0009b40a0) Stream removed, broadcasting: 5\n" May 24 21:29:11.475: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 21:29:11.475: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 21:29:11.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1141 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 21:29:11.662: INFO: stderr: "I0524 21:29:11.589838 236 log.go:172] (0xc00054eb00) (0xc0004fa000) Create stream\nI0524 21:29:11.589879 236 log.go:172] (0xc00054eb00) (0xc0004fa000) Stream added, broadcasting: 1\nI0524 21:29:11.592154 236 log.go:172] (0xc00054eb00) Reply frame received for 1\nI0524 21:29:11.592199 236 log.go:172] (0xc00054eb00) (0xc0006dba40) Create stream\nI0524 21:29:11.592216 236 
log.go:172] (0xc00054eb00) (0xc0006dba40) Stream added, broadcasting: 3\nI0524 21:29:11.593597 236 log.go:172] (0xc00054eb00) Reply frame received for 3\nI0524 21:29:11.593650 236 log.go:172] (0xc00054eb00) (0xc0004fa140) Create stream\nI0524 21:29:11.593665 236 log.go:172] (0xc00054eb00) (0xc0004fa140) Stream added, broadcasting: 5\nI0524 21:29:11.594760 236 log.go:172] (0xc00054eb00) Reply frame received for 5\nI0524 21:29:11.655471 236 log.go:172] (0xc00054eb00) Data frame received for 5\nI0524 21:29:11.655533 236 log.go:172] (0xc0004fa140) (5) Data frame handling\nI0524 21:29:11.655554 236 log.go:172] (0xc0004fa140) (5) Data frame sent\nI0524 21:29:11.655579 236 log.go:172] (0xc00054eb00) Data frame received for 5\nI0524 21:29:11.655602 236 log.go:172] (0xc0004fa140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0524 21:29:11.655623 236 log.go:172] (0xc00054eb00) Data frame received for 3\nI0524 21:29:11.655640 236 log.go:172] (0xc0006dba40) (3) Data frame handling\nI0524 21:29:11.655666 236 log.go:172] (0xc0006dba40) (3) Data frame sent\nI0524 21:29:11.655686 236 log.go:172] (0xc00054eb00) Data frame received for 3\nI0524 21:29:11.655699 236 log.go:172] (0xc0006dba40) (3) Data frame handling\nI0524 21:29:11.657480 236 log.go:172] (0xc00054eb00) Data frame received for 1\nI0524 21:29:11.657506 236 log.go:172] (0xc0004fa000) (1) Data frame handling\nI0524 21:29:11.657525 236 log.go:172] (0xc0004fa000) (1) Data frame sent\nI0524 21:29:11.657553 236 log.go:172] (0xc00054eb00) (0xc0004fa000) Stream removed, broadcasting: 1\nI0524 21:29:11.657683 236 log.go:172] (0xc00054eb00) Go away received\nI0524 21:29:11.657879 236 log.go:172] (0xc00054eb00) (0xc0004fa000) Stream removed, broadcasting: 1\nI0524 21:29:11.657896 236 log.go:172] (0xc00054eb00) (0xc0006dba40) Stream removed, broadcasting: 3\nI0524 21:29:11.657910 236 log.go:172] (0xc00054eb00) (0xc0004fa140) 
Stream removed, broadcasting: 5\n" May 24 21:29:11.662: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 21:29:11.662: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 21:29:11.666: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 24 21:29:21.672: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 24 21:29:21.672: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 24 21:29:21.672: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 24 21:29:21.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1141 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 21:29:21.900: INFO: stderr: "I0524 21:29:21.808804 259 log.go:172] (0xc000114f20) (0xc000546000) Create stream\nI0524 21:29:21.808848 259 log.go:172] (0xc000114f20) (0xc000546000) Stream added, broadcasting: 1\nI0524 21:29:21.811017 259 log.go:172] (0xc000114f20) Reply frame received for 1\nI0524 21:29:21.811107 259 log.go:172] (0xc000114f20) (0xc000651a40) Create stream\nI0524 21:29:21.811132 259 log.go:172] (0xc000114f20) (0xc000651a40) Stream added, broadcasting: 3\nI0524 21:29:21.812173 259 log.go:172] (0xc000114f20) Reply frame received for 3\nI0524 21:29:21.812206 259 log.go:172] (0xc000114f20) (0xc0005460a0) Create stream\nI0524 21:29:21.812218 259 log.go:172] (0xc000114f20) (0xc0005460a0) Stream added, broadcasting: 5\nI0524 21:29:21.813389 259 log.go:172] (0xc000114f20) Reply frame received for 5\nI0524 21:29:21.892693 259 log.go:172] (0xc000114f20) Data frame received for 5\nI0524 21:29:21.892728 259 log.go:172] (0xc0005460a0) (5) Data frame handling\nI0524 
21:29:21.892741 259 log.go:172] (0xc0005460a0) (5) Data frame sent\nI0524 21:29:21.892749 259 log.go:172] (0xc000114f20) Data frame received for 5\nI0524 21:29:21.892755 259 log.go:172] (0xc0005460a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 21:29:21.892777 259 log.go:172] (0xc000114f20) Data frame received for 3\nI0524 21:29:21.892786 259 log.go:172] (0xc000651a40) (3) Data frame handling\nI0524 21:29:21.892806 259 log.go:172] (0xc000651a40) (3) Data frame sent\nI0524 21:29:21.892816 259 log.go:172] (0xc000114f20) Data frame received for 3\nI0524 21:29:21.892826 259 log.go:172] (0xc000651a40) (3) Data frame handling\nI0524 21:29:21.894313 259 log.go:172] (0xc000114f20) Data frame received for 1\nI0524 21:29:21.894341 259 log.go:172] (0xc000546000) (1) Data frame handling\nI0524 21:29:21.894361 259 log.go:172] (0xc000546000) (1) Data frame sent\nI0524 21:29:21.894387 259 log.go:172] (0xc000114f20) (0xc000546000) Stream removed, broadcasting: 1\nI0524 21:29:21.894517 259 log.go:172] (0xc000114f20) Go away received\nI0524 21:29:21.894731 259 log.go:172] (0xc000114f20) (0xc000546000) Stream removed, broadcasting: 1\nI0524 21:29:21.894748 259 log.go:172] (0xc000114f20) (0xc000651a40) Stream removed, broadcasting: 3\nI0524 21:29:21.894756 259 log.go:172] (0xc000114f20) (0xc0005460a0) Stream removed, broadcasting: 5\n" May 24 21:29:21.900: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 21:29:21.900: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 21:29:21.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1141 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 21:29:22.142: INFO: stderr: "I0524 21:29:22.026559 282 log.go:172] (0xc000a1e000) (0xc000a00000) Create stream\nI0524 21:29:22.026625 282 
log.go:172] (0xc000a1e000) (0xc000a00000) Stream added, broadcasting: 1\nI0524 21:29:22.029501 282 log.go:172] (0xc000a1e000) Reply frame received for 1\nI0524 21:29:22.029571 282 log.go:172] (0xc000a1e000) (0xc000a86000) Create stream\nI0524 21:29:22.029605 282 log.go:172] (0xc000a1e000) (0xc000a86000) Stream added, broadcasting: 3\nI0524 21:29:22.030550 282 log.go:172] (0xc000a1e000) Reply frame received for 3\nI0524 21:29:22.030585 282 log.go:172] (0xc000a1e000) (0xc000a860a0) Create stream\nI0524 21:29:22.030598 282 log.go:172] (0xc000a1e000) (0xc000a860a0) Stream added, broadcasting: 5\nI0524 21:29:22.031638 282 log.go:172] (0xc000a1e000) Reply frame received for 5\nI0524 21:29:22.106134 282 log.go:172] (0xc000a1e000) Data frame received for 5\nI0524 21:29:22.106164 282 log.go:172] (0xc000a860a0) (5) Data frame handling\nI0524 21:29:22.106191 282 log.go:172] (0xc000a860a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 21:29:22.134463 282 log.go:172] (0xc000a1e000) Data frame received for 3\nI0524 21:29:22.134497 282 log.go:172] (0xc000a86000) (3) Data frame handling\nI0524 21:29:22.134536 282 log.go:172] (0xc000a86000) (3) Data frame sent\nI0524 21:29:22.134940 282 log.go:172] (0xc000a1e000) Data frame received for 5\nI0524 21:29:22.134954 282 log.go:172] (0xc000a860a0) (5) Data frame handling\nI0524 21:29:22.134986 282 log.go:172] (0xc000a1e000) Data frame received for 3\nI0524 21:29:22.135013 282 log.go:172] (0xc000a86000) (3) Data frame handling\nI0524 21:29:22.136500 282 log.go:172] (0xc000a1e000) Data frame received for 1\nI0524 21:29:22.136517 282 log.go:172] (0xc000a00000) (1) Data frame handling\nI0524 21:29:22.136535 282 log.go:172] (0xc000a00000) (1) Data frame sent\nI0524 21:29:22.136667 282 log.go:172] (0xc000a1e000) (0xc000a00000) Stream removed, broadcasting: 1\nI0524 21:29:22.136685 282 log.go:172] (0xc000a1e000) Go away received\nI0524 21:29:22.137318 282 log.go:172] (0xc000a1e000) (0xc000a00000) Stream removed, 
broadcasting: 1\nI0524 21:29:22.137354 282 log.go:172] (0xc000a1e000) (0xc000a86000) Stream removed, broadcasting: 3\nI0524 21:29:22.137368 282 log.go:172] (0xc000a1e000) (0xc000a860a0) Stream removed, broadcasting: 5\n" May 24 21:29:22.142: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 21:29:22.142: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 21:29:22.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1141 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 21:29:22.368: INFO: stderr: "I0524 21:29:22.268457 302 log.go:172] (0xc0000f5340) (0xc0006af9a0) Create stream\nI0524 21:29:22.268511 302 log.go:172] (0xc0000f5340) (0xc0006af9a0) Stream added, broadcasting: 1\nI0524 21:29:22.271457 302 log.go:172] (0xc0000f5340) Reply frame received for 1\nI0524 21:29:22.271524 302 log.go:172] (0xc0000f5340) (0xc000bea000) Create stream\nI0524 21:29:22.271545 302 log.go:172] (0xc0000f5340) (0xc000bea000) Stream added, broadcasting: 3\nI0524 21:29:22.272419 302 log.go:172] (0xc0000f5340) Reply frame received for 3\nI0524 21:29:22.272460 302 log.go:172] (0xc0000f5340) (0xc000bea0a0) Create stream\nI0524 21:29:22.272475 302 log.go:172] (0xc0000f5340) (0xc000bea0a0) Stream added, broadcasting: 5\nI0524 21:29:22.273511 302 log.go:172] (0xc0000f5340) Reply frame received for 5\nI0524 21:29:22.328798 302 log.go:172] (0xc0000f5340) Data frame received for 5\nI0524 21:29:22.328831 302 log.go:172] (0xc000bea0a0) (5) Data frame handling\nI0524 21:29:22.328851 302 log.go:172] (0xc000bea0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 21:29:22.360826 302 log.go:172] (0xc0000f5340) Data frame received for 3\nI0524 21:29:22.360956 302 log.go:172] (0xc000bea000) (3) Data frame handling\nI0524 21:29:22.360972 302 log.go:172] 
(0xc000bea000) (3) Data frame sent\nI0524 21:29:22.360978 302 log.go:172] (0xc0000f5340) Data frame received for 3\nI0524 21:29:22.360990 302 log.go:172] (0xc000bea000) (3) Data frame handling\nI0524 21:29:22.361013 302 log.go:172] (0xc0000f5340) Data frame received for 5\nI0524 21:29:22.361036 302 log.go:172] (0xc000bea0a0) (5) Data frame handling\nI0524 21:29:22.362753 302 log.go:172] (0xc0000f5340) Data frame received for 1\nI0524 21:29:22.362787 302 log.go:172] (0xc0006af9a0) (1) Data frame handling\nI0524 21:29:22.362805 302 log.go:172] (0xc0006af9a0) (1) Data frame sent\nI0524 21:29:22.362821 302 log.go:172] (0xc0000f5340) (0xc0006af9a0) Stream removed, broadcasting: 1\nI0524 21:29:22.362837 302 log.go:172] (0xc0000f5340) Go away received\nI0524 21:29:22.363112 302 log.go:172] (0xc0000f5340) (0xc0006af9a0) Stream removed, broadcasting: 1\nI0524 21:29:22.363124 302 log.go:172] (0xc0000f5340) (0xc000bea000) Stream removed, broadcasting: 3\nI0524 21:29:22.363129 302 log.go:172] (0xc0000f5340) (0xc000bea0a0) Stream removed, broadcasting: 5\n" May 24 21:29:22.368: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 21:29:22.368: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 21:29:22.368: INFO: Waiting for statefulset status.replicas updated to 0 May 24 21:29:22.371: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 24 21:29:32.380: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 21:29:32.380: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 24 21:29:32.380: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 24 21:29:32.394: INFO: POD NODE PHASE GRACE CONDITIONS May 24 21:29:32.394: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-05-24 21:28:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC }] May 24 21:29:32.394: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:32.394: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:32.394: INFO: May 24 21:29:32.394: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 21:29:33.399: INFO: POD NODE PHASE GRACE CONDITIONS May 24 21:29:33.399: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC }] May 24 21:29:33.399: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:33.399: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:33.399: INFO: May 24 21:29:33.399: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 21:29:34.405: INFO: POD NODE PHASE GRACE CONDITIONS May 24 21:29:34.405: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC }] May 24 21:29:34.405: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:34.405: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:34.405: INFO: May 24 21:29:34.405: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 21:29:35.412: INFO: POD NODE PHASE GRACE CONDITIONS May 24 21:29:35.412: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:28:37 +0000 UTC }] May 24 21:29:35.412: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:35.412: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:35.412: INFO: May 24 21:29:35.412: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 21:29:36.416: INFO: POD NODE PHASE GRACE CONDITIONS May 24 21:29:36.416: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:36.416: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:36.416: INFO: May 24 21:29:36.416: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 21:29:37.421: INFO: POD NODE PHASE GRACE CONDITIONS May 24 21:29:37.421: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:37.421: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:37.421: INFO: May 24 21:29:37.421: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 21:29:38.425: INFO: POD NODE PHASE GRACE CONDITIONS May 24 21:29:38.425: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:38.425: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:38.425: INFO: May 24 21:29:38.425: INFO: 
StatefulSet ss has not reached scale 0, at 2 May 24 21:29:39.430: INFO: POD NODE PHASE GRACE CONDITIONS May 24 21:29:39.430: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:39.430: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 21:29:00 +0000 UTC }] May 24 21:29:39.430: INFO: May 24 21:29:39.430: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 21:29:40.434: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.956869733s May 24 21:29:41.438: INFO: Verifying statefulset ss doesn't scale past 0 for another 952.681577ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1141 May 24 21:29:42.442: INFO: Scaling statefulset ss to 0 May 24 21:29:42.454: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 24 21:29:42.456: INFO: Deleting all statefulset in ns statefulset-1141 May 24 21:29:42.458: INFO: Scaling statefulset ss to 0 May 24 21:29:42.465: INFO: Waiting for 
statefulset status.replicas updated to 0 May 24 21:29:42.467: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:29:42.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1141" for this suite. • [SLOW TEST:64.815 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":62,"skipped":1066,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:29:42.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name 
projected-configmap-test-upd-6af8402f-25aa-4bd6-90b7-dd349239c315 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-6af8402f-25aa-4bd6-90b7-dd349239c315 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:29:48.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4606" for this suite. • [SLOW TEST:6.142 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1073,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:29:48.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 24 21:29:48.700: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 21:29:48.719: 
INFO: Waiting for terminating namespaces to be deleted... May 24 21:29:48.722: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 24 21:29:48.728: INFO: pod-projected-configmaps-ad55e76a-030f-4376-8cc4-c63776c811ac from projected-4606 started at 2020-05-24 21:29:42 +0000 UTC (1 container statuses recorded) May 24 21:29:48.728: INFO: Container projected-configmap-volume-test ready: true, restart count 0 May 24 21:29:48.728: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:29:48.728: INFO: Container kindnet-cni ready: true, restart count 0 May 24 21:29:48.728: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:29:48.728: INFO: Container kube-proxy ready: true, restart count 0 May 24 21:29:48.728: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 24 21:29:48.755: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:29:48.755: INFO: Container kindnet-cni ready: true, restart count 0 May 24 21:29:48.755: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 24 21:29:48.755: INFO: Container kube-bench ready: false, restart count 0 May 24 21:29:48.755: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:29:48.755: INFO: Container kube-proxy ready: true, restart count 0 May 24 21:29:48.755: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 24 21:29:48.755: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-00acae60-5329-4acd-b3d6-ec7a2fac81b3 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-00acae60-5329-4acd-b3d6-ec7a2fac81b3 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-00acae60-5329-4acd-b3d6-ec7a2fac81b3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:34:56.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-136" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.356 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":64,"skipped":1079,"failed":0} [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:34:56.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:34:57.114: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6" in namespace "downward-api-3933" to be "success or failure" May 24 21:34:57.119: INFO: Pod 
"downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.435164ms May 24 21:34:59.123: INFO: Pod "downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008780836s May 24 21:35:01.127: INFO: Pod "downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6": Phase="Running", Reason="", readiness=true. Elapsed: 4.012874664s May 24 21:35:03.130: INFO: Pod "downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015708827s STEP: Saw pod success May 24 21:35:03.130: INFO: Pod "downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6" satisfied condition "success or failure" May 24 21:35:03.132: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6 container client-container: STEP: delete the pod May 24 21:35:03.247: INFO: Waiting for pod downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6 to disappear May 24 21:35:03.252: INFO: Pod downwardapi-volume-6f6b5aa5-85d7-4adc-9d35-8a0cf55200d6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:35:03.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3933" for this suite. 
• [SLOW TEST:6.262 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1079,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:35:03.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:35:03.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02e66ef2-ff02-44bb-a330-e19b863afe88" in namespace "downward-api-5503" to be "success or failure" May 24 21:35:03.386: INFO: Pod "downwardapi-volume-02e66ef2-ff02-44bb-a330-e19b863afe88": Phase="Pending", Reason="", readiness=false. 
Elapsed: 64.637933ms May 24 21:35:05.402: INFO: Pod "downwardapi-volume-02e66ef2-ff02-44bb-a330-e19b863afe88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081149793s May 24 21:35:07.415: INFO: Pod "downwardapi-volume-02e66ef2-ff02-44bb-a330-e19b863afe88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093508531s STEP: Saw pod success May 24 21:35:07.415: INFO: Pod "downwardapi-volume-02e66ef2-ff02-44bb-a330-e19b863afe88" satisfied condition "success or failure" May 24 21:35:07.418: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-02e66ef2-ff02-44bb-a330-e19b863afe88 container client-container: STEP: delete the pod May 24 21:35:07.456: INFO: Waiting for pod downwardapi-volume-02e66ef2-ff02-44bb-a330-e19b863afe88 to disappear May 24 21:35:07.470: INFO: Pod downwardapi-volume-02e66ef2-ff02-44bb-a330-e19b863afe88 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:35:07.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5503" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1099,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:35:07.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 24 21:35:07.821: INFO: Pod name pod-release: Found 0 pods out of 1 May 24 21:35:12.840: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:35:13.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9828" for this suite. 
• [SLOW TEST:6.389 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":67,"skipped":1108,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:35:13.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:35:14.029: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:35:20.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8945" for this suite. 
• [SLOW TEST:7.025 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":68,"skipped":1126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:35:20.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 24 21:35:25.095: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod 
gracefully STEP: verifying the kubelet observed the termination notice May 24 21:35:40.200: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:35:40.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3838" for this suite. • [SLOW TEST:19.318 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":69,"skipped":1160,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:35:40.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:35:40.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8da05dee-eac2-4b33-b291-250c8e6095e2" in namespace "projected-7355" to be "success or failure" May 24 21:35:40.328: INFO: Pod "downwardapi-volume-8da05dee-eac2-4b33-b291-250c8e6095e2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.091719ms May 24 21:35:42.391: INFO: Pod "downwardapi-volume-8da05dee-eac2-4b33-b291-250c8e6095e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074896178s May 24 21:35:44.396: INFO: Pod "downwardapi-volume-8da05dee-eac2-4b33-b291-250c8e6095e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079393586s STEP: Saw pod success May 24 21:35:44.396: INFO: Pod "downwardapi-volume-8da05dee-eac2-4b33-b291-250c8e6095e2" satisfied condition "success or failure" May 24 21:35:44.399: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8da05dee-eac2-4b33-b291-250c8e6095e2 container client-container: STEP: delete the pod May 24 21:35:44.444: INFO: Waiting for pod downwardapi-volume-8da05dee-eac2-4b33-b291-250c8e6095e2 to disappear May 24 21:35:44.456: INFO: Pod downwardapi-volume-8da05dee-eac2-4b33-b291-250c8e6095e2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:35:44.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7355" for this suite. 
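The projected downward API volume exercised by this test can be sketched as a pod manifest along the following lines. This is a minimal illustration of the API shape only; the pod, container, and volume names are made up and are not taken from the run above.

```python
# Sketch of a pod that exposes its own name through a projected
# downwardAPI volume, similar to what the conformance test checks.
# All names here are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",  # illustrative image choice
            "command": ["sh", "-c", "cat /etc/podinfo/podname"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {
                "sources": [{
                    "downwardAPI": {
                        "items": [{
                            "path": "podname",
                            "fieldRef": {"fieldPath": "metadata.name"},
                        }],
                    },
                }],
            },
        }],
    },
}

# The kubelet writes metadata.name into /etc/podinfo/podname, so the
# container can read its own pod name from the mounted file.
assert pod["spec"]["volumes"][0]["projected"]["sources"][0][
    "downwardAPI"]["items"][0]["fieldRef"]["fieldPath"] == "metadata.name"
```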
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1170,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:35:44.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods changes May 24 21:35:49.616: INFO: Pod name pod-adoption-release: Found 1 pod out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:35:49.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6437" for this suite. 
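The adopt/release steps above hinge on label-selector matching: a ReplicaSet adopts a bare pod whose labels satisfy its selector, and releases a pod whose labels stop matching. A toy sketch of that matching rule (illustrative Python, not the actual controller code):

```python
# Illustrative sketch of equality-based selector matching, the rule
# behind the adopt/release behavior verified above.
def selector_matches(selector: dict, labels: dict) -> bool:
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-adoption-release"}

# The orphan pod's labels match the selector, so it is adopted.
orphan_pod_labels = {"name": "pod-adoption-release"}
assert selector_matches(selector, orphan_pod_labels)

# After the pod's label is changed, it no longer matches and is released.
orphan_pod_labels["name"] = "pod-adoption-release-renamed"  # hypothetical new value
assert not selector_matches(selector, orphan_pod_labels)
```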
• [SLOW TEST:5.291 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":71,"skipped":1184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:35:49.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
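A "simple DaemonSet" of the kind being created here looks roughly like the sketch below. With no tolerations in the pod template, its pods cannot land on tainted nodes, which is why the log repeatedly reports skipping the control-plane node. The manifest shape is illustrative, not the test's exact object.

```python
# Rough shape of a simple DaemonSet like the one the test creates.
# Label keys/values are illustrative; only the structure matters.
daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set"},
    "spec": {
        "selector": {"matchLabels": {"daemonset-name": "daemon-set"}},
        "template": {
            "metadata": {"labels": {"daemonset-name": "daemon-set"}},
            "spec": {
                "containers": [{
                    "name": "app",
                    "image": "httpd:2.4.38-alpine",  # illustrative image
                }],
                # No "tolerations" key: pods skip nodes carrying
                # NoSchedule taints such as node-role.kubernetes.io/master.
            },
        },
    },
}

# The selector must match the pod template's labels, or the API server
# rejects the DaemonSet.
assert daemonset["spec"]["selector"]["matchLabels"] == \
    daemonset["spec"]["template"]["metadata"]["labels"]
```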
May 24 21:35:49.925: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:49.930: INFO: Number of nodes with available pods: 0 May 24 21:35:49.930: INFO: Node jerma-worker is running more than one daemon pod May 24 21:35:50.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:50.938: INFO: Number of nodes with available pods: 0 May 24 21:35:50.938: INFO: Node jerma-worker is running more than one daemon pod May 24 21:35:52.010: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:52.014: INFO: Number of nodes with available pods: 0 May 24 21:35:52.014: INFO: Node jerma-worker is running more than one daemon pod May 24 21:35:52.962: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:52.999: INFO: Number of nodes with available pods: 0 May 24 21:35:52.999: INFO: Node jerma-worker is running more than one daemon pod May 24 21:35:53.936: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:53.940: INFO: Number of nodes with available pods: 0 May 24 21:35:53.940: INFO: Node jerma-worker is running more than one daemon pod May 24 21:35:54.933: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:54.935: INFO: Number of nodes with available pods: 0 May 24 21:35:54.935: INFO: Node 
jerma-worker is running more than one daemon pod May 24 21:35:55.934: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:55.938: INFO: Number of nodes with available pods: 2 May 24 21:35:55.938: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 24 21:35:56.012: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:56.062: INFO: Number of nodes with available pods: 1 May 24 21:35:56.062: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:35:57.087: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:57.091: INFO: Number of nodes with available pods: 1 May 24 21:35:57.091: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:35:58.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:58.072: INFO: Number of nodes with available pods: 1 May 24 21:35:58.072: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:35:59.067: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:35:59.070: INFO: Number of nodes with available pods: 1 May 24 21:35:59.070: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:00.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 24 21:36:00.072: INFO: Number of nodes with available pods: 1 May 24 21:36:00.072: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:01.072: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:01.075: INFO: Number of nodes with available pods: 1 May 24 21:36:01.075: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:02.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:02.071: INFO: Number of nodes with available pods: 1 May 24 21:36:02.071: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:03.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:03.077: INFO: Number of nodes with available pods: 1 May 24 21:36:03.077: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:04.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:04.072: INFO: Number of nodes with available pods: 1 May 24 21:36:04.072: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:05.067: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:05.071: INFO: Number of nodes with available pods: 1 May 24 21:36:05.071: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:06.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:06.071: INFO: Number of nodes with available pods: 1 May 24 21:36:06.071: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:07.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:07.071: INFO: Number of nodes with available pods: 1 May 24 21:36:07.071: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:08.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:08.072: INFO: Number of nodes with available pods: 1 May 24 21:36:08.072: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:09.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:09.071: INFO: Number of nodes with available pods: 1 May 24 21:36:09.071: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:10.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:10.072: INFO: Number of nodes with available pods: 1 May 24 21:36:10.072: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:11.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:11.072: INFO: Number of nodes with available pods: 1 May 24 21:36:11.072: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:12.067: INFO: DaemonSet pods 
can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:12.070: INFO: Number of nodes with available pods: 1 May 24 21:36:12.071: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:13.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:13.072: INFO: Number of nodes with available pods: 1 May 24 21:36:13.072: INFO: Node jerma-worker2 is running more than one daemon pod May 24 21:36:14.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 21:36:14.072: INFO: Number of nodes with available pods: 2 May 24 21:36:14.072: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3773, will wait for the garbage collector to delete the pods May 24 21:36:14.134: INFO: Deleting DaemonSet.extensions daemon-set took: 6.387824ms May 24 21:36:14.434: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.249952ms May 24 21:36:29.547: INFO: Number of nodes with available pods: 0 May 24 21:36:29.547: INFO: Number of running nodes: 0, number of available pods: 0 May 24 21:36:29.550: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3773/daemonsets","resourceVersion":"18855580"},"items":null} May 24 21:36:29.553: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3773/pods","resourceVersion":"18855580"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:36:29.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3773" for this suite. • [SLOW TEST:39.810 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":72,"skipped":1241,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:36:29.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-ebe03412-9efd-4f52-9aa3-e0ee4ce4061f in namespace container-probe-4136 May 24 21:36:33.680: 
INFO: Started pod test-webserver-ebe03412-9efd-4f52-9aa3-e0ee4ce4061f in namespace container-probe-4136 STEP: checking the pod's current state and verifying that restartCount is present May 24 21:36:33.684: INFO: Initial restart count of pod test-webserver-ebe03412-9efd-4f52-9aa3-e0ee4ce4061f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:40:34.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4136" for this suite. • [SLOW TEST:244.736 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1247,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:40:34.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name 
configmap-test-volume-0bd00111-2412-410a-94ad-38d8946a2f8a STEP: Creating a pod to test consume configMaps May 24 21:40:34.405: INFO: Waiting up to 5m0s for pod "pod-configmaps-1f16b877-34e7-4887-8971-497e18efea11" in namespace "configmap-4625" to be "success or failure" May 24 21:40:34.696: INFO: Pod "pod-configmaps-1f16b877-34e7-4887-8971-497e18efea11": Phase="Pending", Reason="", readiness=false. Elapsed: 291.219965ms May 24 21:40:36.700: INFO: Pod "pod-configmaps-1f16b877-34e7-4887-8971-497e18efea11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295066956s May 24 21:40:38.704: INFO: Pod "pod-configmaps-1f16b877-34e7-4887-8971-497e18efea11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.299147687s STEP: Saw pod success May 24 21:40:38.704: INFO: Pod "pod-configmaps-1f16b877-34e7-4887-8971-497e18efea11" satisfied condition "success or failure" May 24 21:40:38.708: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1f16b877-34e7-4887-8971-497e18efea11 container configmap-volume-test: STEP: delete the pod May 24 21:40:38.739: INFO: Waiting for pod pod-configmaps-1f16b877-34e7-4887-8971-497e18efea11 to disappear May 24 21:40:38.744: INFO: Pod pod-configmaps-1f16b877-34e7-4887-8971-497e18efea11 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:40:38.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4625" for this suite. 
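The ConfigMap-as-volume pattern this test verifies can be sketched as follows; each key under `.data` becomes a file under the mount path. Names and data values are illustrative, not those generated by the run.

```python
# Sketch of consuming a ConfigMap through a volume, as in the test above.
# All names and data are illustrative.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-volume-example"},
    "data": {"data-1": "value-1"},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},
    "spec": {
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",  # illustrative image
            # Reads the file materialized from the ConfigMap key "data-1".
            "command": ["sh", "-c", "cat /etc/configmap-volume/data-1"],
            "volumeMounts": [{
                "name": "configmap-volume",
                "mountPath": "/etc/configmap-volume",
            }],
        }],
        "volumes": [{
            "name": "configmap-volume",
            "configMap": {"name": "configmap-volume-example"},
        }],
    },
}

# The volume references the ConfigMap by name within the same namespace.
assert pod["spec"]["volumes"][0]["configMap"]["name"] == configmap["metadata"]["name"]
```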
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1259,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:40:38.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:40:38.871: INFO: Creating deployment "test-recreate-deployment" May 24 21:40:38.897: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 24 21:40:38.918: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 24 21:40:41.037: INFO: Waiting deployment "test-recreate-deployment" to complete May 24 21:40:41.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953239, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953239, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953239, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:40:43.044: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 24 21:40:43.049: INFO: Updating deployment test-recreate-deployment May 24 21:40:43.049: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 24 21:40:43.549: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1000 /apis/apps/v1/namespaces/deployment-1000/deployments/test-recreate-deployment c9e06324-bab1-465f-9717-132138b7ecfe 18856421 2 2020-05-24 21:40:38 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033f4338 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-24 21:40:43 +0000 UTC,LastTransitionTime:2020-05-24 21:40:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-24 21:40:43 +0000 UTC,LastTransitionTime:2020-05-24 21:40:38 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 24 21:40:43.608: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1000 /apis/apps/v1/namespaces/deployment-1000/replicasets/test-recreate-deployment-5f94c574ff 897e8b30-6d88-446e-8848-a87b365dccc7 18856419 1 2020-05-24 21:40:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c9e06324-bab1-465f-9717-132138b7ecfe 0xc0033f46c7 0xc0033f46c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033f4728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 21:40:43.608: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 24 21:40:43.608: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1000 /apis/apps/v1/namespaces/deployment-1000/replicasets/test-recreate-deployment-799c574856 4089b2c7-f957-46d8-bdbd-c5e07e201b0f 18856410 2 2020-05-24 21:40:38 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c9e06324-bab1-465f-9717-132138b7ecfe 0xc0033f4797 0xc0033f4798}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033f4808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 21:40:43.611: INFO: Pod "test-recreate-deployment-5f94c574ff-ffkv8" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-ffkv8 test-recreate-deployment-5f94c574ff- deployment-1000 /api/v1/namespaces/deployment-1000/pods/test-recreate-deployment-5f94c574ff-ffkv8 0e98d952-fcf9-482a-bf98-de4aa3f0098f 18856422 0 2020-05-24 21:40:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 897e8b30-6d88-446e-8848-a87b365dccc7 0xc001b46d87 0xc001b46d88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sd2jq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sd2jq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sd2jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:40:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:40:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:40:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:40:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-24 21:40:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:40:43.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1000" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":75,"skipped":1271,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:40:43.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-ef02d3db-1445-4a93-930a-d6ee1d3c1a0e [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 
21:40:43.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7172" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":76,"skipped":1280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:40:43.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ee96c81e-c599-45c3-aa7c-bdac19eff0d6 STEP: Creating a pod to test consume secrets May 24 21:40:43.946: INFO: Waiting up to 5m0s for pod "pod-secrets-f92759e8-5b7f-42b9-aad4-884a40990f63" in namespace "secrets-354" to be "success or failure" May 24 21:40:44.121: INFO: Pod "pod-secrets-f92759e8-5b7f-42b9-aad4-884a40990f63": Phase="Pending", Reason="", readiness=false. Elapsed: 175.428938ms May 24 21:40:46.126: INFO: Pod "pod-secrets-f92759e8-5b7f-42b9-aad4-884a40990f63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179740829s May 24 21:40:48.129: INFO: Pod "pod-secrets-f92759e8-5b7f-42b9-aad4-884a40990f63": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.183658036s STEP: Saw pod success May 24 21:40:48.130: INFO: Pod "pod-secrets-f92759e8-5b7f-42b9-aad4-884a40990f63" satisfied condition "success or failure" May 24 21:40:48.133: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f92759e8-5b7f-42b9-aad4-884a40990f63 container secret-volume-test: STEP: delete the pod May 24 21:40:48.159: INFO: Waiting for pod pod-secrets-f92759e8-5b7f-42b9-aad4-884a40990f63 to disappear May 24 21:40:48.163: INFO: Pod pod-secrets-f92759e8-5b7f-42b9-aad4-884a40990f63 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:40:48.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-354" for this suite. STEP: Destroying namespace "secret-namespace-9215" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1319,"failed":0} ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:40:48.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a 
replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 24 21:40:48.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9832' May 24 21:40:52.024: INFO: stderr: "" May 24 21:40:52.024: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 21:40:52.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9832' May 24 21:40:52.166: INFO: stderr: "" May 24 21:40:52.166: INFO: stdout: "update-demo-nautilus-cpd75 update-demo-nautilus-mrlm8 " May 24 21:40:52.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpd75 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9832' May 24 21:40:52.264: INFO: stderr: "" May 24 21:40:52.264: INFO: stdout: "" May 24 21:40:52.264: INFO: update-demo-nautilus-cpd75 is created but not running May 24 21:40:57.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9832' May 24 21:40:57.369: INFO: stderr: "" May 24 21:40:57.369: INFO: stdout: "update-demo-nautilus-cpd75 update-demo-nautilus-mrlm8 " May 24 21:40:57.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpd75 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9832' May 24 21:40:57.454: INFO: stderr: "" May 24 21:40:57.454: INFO: stdout: "true" May 24 21:40:57.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpd75 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9832' May 24 21:40:57.552: INFO: stderr: "" May 24 21:40:57.552: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:40:57.552: INFO: validating pod update-demo-nautilus-cpd75 May 24 21:40:57.562: INFO: got data: { "image": "nautilus.jpg" } May 24 21:40:57.562: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 21:40:57.562: INFO: update-demo-nautilus-cpd75 is verified up and running May 24 21:40:57.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrlm8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9832' May 24 21:40:57.653: INFO: stderr: "" May 24 21:40:57.653: INFO: stdout: "true" May 24 21:40:57.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrlm8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9832' May 24 21:40:57.746: INFO: stderr: "" May 24 21:40:57.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:40:57.746: INFO: validating pod update-demo-nautilus-mrlm8 May 24 21:40:57.753: INFO: got data: { "image": "nautilus.jpg" } May 24 21:40:57.753: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 24 21:40:57.753: INFO: update-demo-nautilus-mrlm8 is verified up and running STEP: using delete to clean up resources May 24 21:40:57.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9832' May 24 21:40:57.845: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 21:40:57.845: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 24 21:40:57.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9832' May 24 21:40:57.944: INFO: stderr: "No resources found in kubectl-9832 namespace.\n" May 24 21:40:57.944: INFO: stdout: "" May 24 21:40:57.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9832 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 21:40:58.043: INFO: stderr: "" May 24 21:40:58.043: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:40:58.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9832" for this suite. 
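(Editor's note, not part of the log.) The Update Demo test above repeatedly drives `kubectl get pods -o template` with Go templates such as `{{range .items}}{{.metadata.name}} {{end}}` to pull pod names out of the list response. A minimal sketch of how that template evaluates, using only the standard `text/template` package and a fabricated stand-in for the `kubectl get pods -o json` payload (kubectl additionally registers helper functions like `exists`, which plain `text/template` does not have):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// renderNames applies the same template string the e2e test passes to
// kubectl, against an already-decoded pod-list JSON document.
func renderNames(raw []byte) (string, error) {
	var podList map[string]interface{}
	if err := json.Unmarshal(raw, &podList); err != nil {
		return "", err
	}
	tmpl := template.Must(template.New("names").Parse(
		`{{range .items}}{{.metadata.name}} {{end}}`))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, podList); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Stand-in data mimicking the two nautilus pods seen in the log.
	raw := []byte(`{"items":[
		{"metadata":{"name":"update-demo-nautilus-cpd75"}},
		{"metadata":{"name":"update-demo-nautilus-mrlm8"}}]}`)
	names, err := renderNames(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(names)
}
```

This mirrors the `stdout: "update-demo-nautilus-cpd75 update-demo-nautilus-mrlm8 "` lines above: the template emits each name followed by a space, which is why the test splits on whitespace.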
• [SLOW TEST:9.860 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":78,"skipped":1319,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:40:58.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 24 21:40:58.209: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 24 21:41:09.625: INFO: >>> kubeConfig: /root/.kube/config May 24 21:41:11.609: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:41:22.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9187" for this suite. • [SLOW TEST:24.004 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":79,"skipped":1341,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:41:22.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated 
STEP: the termination message should be set May 24 21:41:25.179: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:41:25.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6631" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1343,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:41:25.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 24 21:41:32.120: INFO: Successfully updated pod "adopt-release-gg7w7" STEP: Checking that the Job readopts the Pod May 24 21:41:32.120: INFO: Waiting up to 15m0s for pod "adopt-release-gg7w7" in namespace "job-4756" to be "adopted" May 24 21:41:32.139: INFO: Pod "adopt-release-gg7w7": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.737473ms May 24 21:41:34.144: INFO: Pod "adopt-release-gg7w7": Phase="Running", Reason="", readiness=true. Elapsed: 2.023758434s May 24 21:41:34.144: INFO: Pod "adopt-release-gg7w7" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 24 21:41:34.653: INFO: Successfully updated pod "adopt-release-gg7w7" STEP: Checking that the Job releases the Pod May 24 21:41:34.653: INFO: Waiting up to 15m0s for pod "adopt-release-gg7w7" in namespace "job-4756" to be "released" May 24 21:41:34.733: INFO: Pod "adopt-release-gg7w7": Phase="Running", Reason="", readiness=true. Elapsed: 79.60255ms May 24 21:41:36.736: INFO: Pod "adopt-release-gg7w7": Phase="Running", Reason="", readiness=true. Elapsed: 2.082544183s May 24 21:41:36.736: INFO: Pod "adopt-release-gg7w7" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:41:36.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4756" for this suite. 
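(Editor's note, not part of the log.) The Job test above orphans a pod, watches the Job controller re-adopt it while its labels still match, then strips the labels and watches the pod be released. A simplified sketch of the matching rule behind that adopt/release decision, assuming plain equality-based selectors (the real controller uses `k8s.io/apimachinery`'s `labels.Selector` and also consults ownerReferences):

```go
package main

import "fmt"

// matchesSelector reports whether a pod's labels satisfy a controller's
// equality selector — the condition under which the Job controller above
// adopts an orphaned pod, and whose failure (labels removed) causes the
// pod to be released. Simplified sketch, not the real controller code.
func matchesSelector(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// "adopt-release" mirrors the pod-name prefix in the log; the label
	// key/value here are illustrative, not taken from the test source.
	selector := map[string]string{"job-name": "adopt-release"}

	orphaned := map[string]string{"job-name": "adopt-release"} // labels intact
	released := map[string]string{}                            // labels removed

	fmt.Println(matchesSelector(selector, orphaned)) // adopted
	fmt.Println(matchesSelector(selector, released)) // released
}
```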
• [SLOW TEST:11.314 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":81,"skipped":1344,"failed":0} SSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:41:36.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:41:37.140: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4a45aa75-ea80-4b49-ae0e-7510b8052c9a" in namespace "security-context-test-6263" to be "success or failure" May 24 21:41:37.205: INFO: Pod "alpine-nnp-false-4a45aa75-ea80-4b49-ae0e-7510b8052c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 65.026661ms May 24 21:41:39.208: INFO: Pod "alpine-nnp-false-4a45aa75-ea80-4b49-ae0e-7510b8052c9a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.067755584s May 24 21:41:41.332: INFO: Pod "alpine-nnp-false-4a45aa75-ea80-4b49-ae0e-7510b8052c9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.191206917s May 24 21:41:41.332: INFO: Pod "alpine-nnp-false-4a45aa75-ea80-4b49-ae0e-7510b8052c9a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:41:41.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6263" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1348,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:41:41.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-df057a01-4640-4ba0-8ea7-06de5f781318 STEP: Creating a pod to test consume secrets May 24 21:41:41.600: INFO: Waiting up to 5m0s for pod "pod-secrets-5422c9b5-41ed-489f-8c17-b3e83b0d544f" in namespace "secrets-3754" to be "success or failure" May 24 21:41:41.603: INFO: Pod 
"pod-secrets-5422c9b5-41ed-489f-8c17-b3e83b0d544f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576768ms May 24 21:41:43.607: INFO: Pod "pod-secrets-5422c9b5-41ed-489f-8c17-b3e83b0d544f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006710306s May 24 21:41:45.611: INFO: Pod "pod-secrets-5422c9b5-41ed-489f-8c17-b3e83b0d544f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011011477s STEP: Saw pod success May 24 21:41:45.611: INFO: Pod "pod-secrets-5422c9b5-41ed-489f-8c17-b3e83b0d544f" satisfied condition "success or failure" May 24 21:41:45.614: INFO: Trying to get logs from node jerma-worker pod pod-secrets-5422c9b5-41ed-489f-8c17-b3e83b0d544f container secret-volume-test: STEP: delete the pod May 24 21:41:45.688: INFO: Waiting for pod pod-secrets-5422c9b5-41ed-489f-8c17-b3e83b0d544f to disappear May 24 21:41:45.711: INFO: Pod pod-secrets-5422c9b5-41ed-489f-8c17-b3e83b0d544f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:41:45.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3754" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1358,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:41:45.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:41:46.499: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:41:48.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953306, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953306, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953306, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953306, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:41:50.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953306, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953306, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953306, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953306, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:41:53.565: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:41:53.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4148-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:41:54.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7008" for this suite. STEP: Destroying namespace "webhook-7008-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.085 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":84,"skipped":1369,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:41:54.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
pod-subpath-test-secret-7hm8 STEP: Creating a pod to test atomic-volume-subpath May 24 21:41:54.904: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7hm8" in namespace "subpath-2700" to be "success or failure" May 24 21:41:54.919: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.136576ms May 24 21:41:56.924: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020138545s May 24 21:41:58.929: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 4.025155462s May 24 21:42:00.933: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 6.029579797s May 24 21:42:02.938: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 8.034046793s May 24 21:42:04.941: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 10.037612246s May 24 21:42:06.945: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 12.04120449s May 24 21:42:08.949: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 14.045810773s May 24 21:42:10.953: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 16.049646805s May 24 21:42:13.014: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 18.110588165s May 24 21:42:15.019: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 20.115017986s May 24 21:42:17.026: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Running", Reason="", readiness=true. Elapsed: 22.122580714s May 24 21:42:19.030: INFO: Pod "pod-subpath-test-secret-7hm8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.126642246s STEP: Saw pod success May 24 21:42:19.030: INFO: Pod "pod-subpath-test-secret-7hm8" satisfied condition "success or failure" May 24 21:42:19.033: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-7hm8 container test-container-subpath-secret-7hm8: STEP: delete the pod May 24 21:42:19.060: INFO: Waiting for pod pod-subpath-test-secret-7hm8 to disappear May 24 21:42:19.110: INFO: Pod pod-subpath-test-secret-7hm8 no longer exists STEP: Deleting pod pod-subpath-test-secret-7hm8 May 24 21:42:19.110: INFO: Deleting pod "pod-subpath-test-secret-7hm8" in namespace "subpath-2700" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:42:19.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2700" for this suite. • [SLOW TEST:24.319 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":85,"skipped":1386,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:42:19.121: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 24 21:42:19.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5379' May 24 21:42:20.771: INFO: stderr: "" May 24 21:42:20.771: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 21:42:20.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5379' May 24 21:42:20.935: INFO: stderr: "" May 24 21:42:20.935: INFO: stdout: "update-demo-nautilus-tch85 update-demo-nautilus-w8w5d " May 24 21:42:20.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tch85 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:21.026: INFO: stderr: "" May 24 21:42:21.026: INFO: stdout: "" May 24 21:42:21.026: INFO: update-demo-nautilus-tch85 is created but not running May 24 21:42:26.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5379' May 24 21:42:26.126: INFO: stderr: "" May 24 21:42:26.126: INFO: stdout: "update-demo-nautilus-tch85 update-demo-nautilus-w8w5d " May 24 21:42:26.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tch85 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:26.216: INFO: stderr: "" May 24 21:42:26.216: INFO: stdout: "true" May 24 21:42:26.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tch85 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:26.308: INFO: stderr: "" May 24 21:42:26.308: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:42:26.308: INFO: validating pod update-demo-nautilus-tch85 May 24 21:42:26.317: INFO: got data: { "image": "nautilus.jpg" } May 24 21:42:26.317: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 21:42:26.317: INFO: update-demo-nautilus-tch85 is verified up and running May 24 21:42:26.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w5d -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:26.414: INFO: stderr: "" May 24 21:42:26.414: INFO: stdout: "true" May 24 21:42:26.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w5d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:26.502: INFO: stderr: "" May 24 21:42:26.502: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:42:26.502: INFO: validating pod update-demo-nautilus-w8w5d May 24 21:42:26.519: INFO: got data: { "image": "nautilus.jpg" } May 24 21:42:26.519: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 21:42:26.519: INFO: update-demo-nautilus-w8w5d is verified up and running STEP: rolling-update to new replication controller May 24 21:42:26.522: INFO: scanned /root for discovery docs: May 24 21:42:26.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5379' May 24 21:42:49.199: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 24 21:42:49.199: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 24 21:42:49.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5379' May 24 21:42:49.301: INFO: stderr: "" May 24 21:42:49.301: INFO: stdout: "update-demo-kitten-fftmj update-demo-kitten-z6th5 update-demo-nautilus-tch85 " STEP: Replicas for name=update-demo: expected=2 actual=3 May 24 21:42:54.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5379' May 24 21:42:54.409: INFO: stderr: "" May 24 21:42:54.409: INFO: stdout: "update-demo-kitten-fftmj update-demo-kitten-z6th5 " May 24 21:42:54.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fftmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:54.498: INFO: stderr: "" May 24 21:42:54.498: INFO: stdout: "true" May 24 21:42:54.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fftmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:54.586: INFO: stderr: "" May 24 21:42:54.586: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 24 21:42:54.586: INFO: validating pod update-demo-kitten-fftmj May 24 21:42:54.600: INFO: got data: { "image": "kitten.jpg" } May 24 21:42:54.600: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
May 24 21:42:54.600: INFO: update-demo-kitten-fftmj is verified up and running May 24 21:42:54.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-z6th5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:54.691: INFO: stderr: "" May 24 21:42:54.691: INFO: stdout: "true" May 24 21:42:54.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-z6th5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5379' May 24 21:42:54.791: INFO: stderr: "" May 24 21:42:54.791: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 24 21:42:54.791: INFO: validating pod update-demo-kitten-z6th5 May 24 21:42:54.803: INFO: got data: { "image": "kitten.jpg" } May 24 21:42:54.803: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 24 21:42:54.803: INFO: update-demo-kitten-z6th5 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:42:54.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5379" for this suite. 
• [SLOW TEST:35.690 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":86,"skipped":1388,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:42:54.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-mpxpc in namespace proxy-5482 I0524 21:42:54.946359 6 runners.go:189] Created replication controller with name: proxy-service-mpxpc, namespace: proxy-5482, replica count: 1 I0524 21:42:55.996753 6 runners.go:189] proxy-service-mpxpc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 21:42:56.996992 6 runners.go:189] proxy-service-mpxpc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 
21:42:57.997295 6 runners.go:189] proxy-service-mpxpc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 21:42:58.997519 6 runners.go:189] proxy-service-mpxpc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 21:42:59.997698 6 runners.go:189] proxy-service-mpxpc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 21:43:00.997883 6 runners.go:189] proxy-service-mpxpc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 21:43:01.998082 6 runners.go:189] proxy-service-mpxpc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 21:43:02.007: INFO: setup took 7.117347973s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 24 21:43:02.016: INFO: (0) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 8.150198ms) May 24 21:43:02.034: INFO: (0) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 26.131873ms) May 24 21:43:02.034: INFO: (0) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 26.092747ms) May 24 21:43:02.034: INFO: (0) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 26.621492ms) May 24 21:43:02.034: INFO: (0) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 26.565752ms) May 24 21:43:02.034: INFO: (0) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 26.73657ms) May 24 21:43:02.034: INFO: (0) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... 
(200; 26.998019ms) May 24 21:43:02.039: INFO: (0) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 30.956305ms) May 24 21:43:02.039: INFO: (0) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 31.14096ms) May 24 21:43:02.042: INFO: (0) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 34.559768ms) May 24 21:43:02.042: INFO: (0) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: ... (200; 10.781812ms) May 24 21:43:02.068: INFO: (1) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 10.858096ms) May 24 21:43:02.068: INFO: (1) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 10.944431ms) May 24 21:43:02.068: INFO: (1) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 10.989161ms) May 24 21:43:02.068: INFO: (1) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 11.304808ms) May 24 21:43:02.068: INFO: (1) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 11.304071ms) May 24 21:43:02.071: INFO: (1) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 14.177911ms) May 24 21:43:02.071: INFO: (1) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 14.183403ms) May 24 21:43:02.071: INFO: (1) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... (200; 14.262449ms) May 24 21:43:02.074: INFO: (2) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 2.89693ms) May 24 21:43:02.074: INFO: (2) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... 
(200; 2.776051ms) May 24 21:43:02.077: INFO: (2) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.39242ms) May 24 21:43:02.077: INFO: (2) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 5.542661ms) May 24 21:43:02.078: INFO: (2) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 6.011339ms) May 24 21:43:02.078: INFO: (2) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 6.139009ms) May 24 21:43:02.078: INFO: (2) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 6.118235ms) May 24 21:43:02.078: INFO: (2) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 6.379739ms) May 24 21:43:02.078: INFO: (2) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 6.660381ms) May 24 21:43:02.078: INFO: (2) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 6.577644ms) May 24 21:43:02.078: INFO: (2) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: ... (200; 6.73628ms) May 24 21:43:02.078: INFO: (2) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 6.755304ms) May 24 21:43:02.082: INFO: (3) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 3.015163ms) May 24 21:43:02.082: INFO: (3) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: ... 
(200; 3.586787ms) May 24 21:43:02.082: INFO: (3) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 3.609553ms) May 24 21:43:02.082: INFO: (3) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 3.660923ms) May 24 21:43:02.082: INFO: (3) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 3.661181ms) May 24 21:43:02.083: INFO: (3) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 4.327106ms) May 24 21:43:02.083: INFO: (3) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 4.518519ms) May 24 21:43:02.083: INFO: (3) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 4.432684ms) May 24 21:43:02.083: INFO: (3) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 4.679529ms) May 24 21:43:02.083: INFO: (3) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 4.643441ms) May 24 21:43:02.083: INFO: (3) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... (200; 4.587095ms) May 24 21:43:02.083: INFO: (3) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.599147ms) May 24 21:43:02.083: INFO: (3) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 4.775142ms) May 24 21:43:02.084: INFO: (3) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 4.95583ms) May 24 21:43:02.087: INFO: (4) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 3.31295ms) May 24 21:43:02.087: INFO: (4) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test<... 
(200; 6.540916ms) May 24 21:43:02.090: INFO: (4) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 6.604117ms) May 24 21:43:02.090: INFO: (4) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 6.580101ms) May 24 21:43:02.090: INFO: (4) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 6.572622ms) May 24 21:43:02.090: INFO: (4) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 6.583761ms) May 24 21:43:02.090: INFO: (4) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 6.540791ms) May 24 21:43:02.090: INFO: (4) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 6.524404ms) May 24 21:43:02.091: INFO: (4) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 6.915807ms) May 24 21:43:02.092: INFO: (4) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 8.133993ms) May 24 21:43:02.092: INFO: (4) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 8.028152ms) May 24 21:43:02.095: INFO: (5) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 3.090773ms) May 24 21:43:02.096: INFO: (5) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.070881ms) May 24 21:43:02.097: INFO: (5) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 4.582388ms) May 24 21:43:02.097: INFO: (5) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 4.76422ms) May 24 21:43:02.097: INFO: (5) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 4.896481ms) May 24 21:43:02.097: INFO: (5) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... 
(200; 4.996018ms) May 24 21:43:02.097: INFO: (5) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: ... (200; 6.79052ms) May 24 21:43:02.099: INFO: (5) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 6.826459ms) May 24 21:43:02.103: INFO: (6) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 4.470027ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 4.709933ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.77679ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... (200; 4.896047ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 4.96923ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 5.17074ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... 
(200; 5.257402ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.345579ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 5.267949ms) May 24 21:43:02.104: INFO: (6) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test (200; 6.237619ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 6.814312ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 7.025505ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 7.070622ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 6.964343ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test<... (200; 7.023017ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 7.059959ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 7.114749ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... 
(200; 7.021274ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 7.075887ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 7.115273ms) May 24 21:43:02.121: INFO: (7) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 7.195749ms) May 24 21:43:02.125: INFO: (8) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 3.958328ms) May 24 21:43:02.125: INFO: (8) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 4.024904ms) May 24 21:43:02.126: INFO: (8) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 4.176835ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 5.366352ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 5.400185ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 5.375704ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 5.420657ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 5.460329ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 5.401261ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 5.448536ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... 
(200; 5.502837ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 5.573727ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 5.656683ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.862928ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 5.883447ms) May 24 21:43:02.127: INFO: (8) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test<... (200; 3.50922ms) May 24 21:43:02.131: INFO: (9) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 3.76475ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 5.295377ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 5.675702ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 5.766163ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 5.859575ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.742417ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 5.849631ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 5.840435ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.808202ms) May 24 21:43:02.133: INFO: (9) 
/api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 5.824596ms) May 24 21:43:02.133: INFO: (9) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test (200; 5.922265ms) May 24 21:43:02.136: INFO: (10) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 2.673947ms) May 24 21:43:02.138: INFO: (10) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 4.285186ms) May 24 21:43:02.138: INFO: (10) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 4.307773ms) May 24 21:43:02.138: INFO: (10) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... (200; 4.182891ms) May 24 21:43:02.138: INFO: (10) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 3.976983ms) May 24 21:43:02.138: INFO: (10) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 4.493227ms) May 24 21:43:02.138: INFO: (10) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 3.983137ms) May 24 21:43:02.139: INFO: (10) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.524762ms) May 24 21:43:02.139: INFO: (10) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 5.172803ms) May 24 21:43:02.139: INFO: (10) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 5.484265ms) May 24 21:43:02.140: INFO: (10) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 5.470347ms) May 24 21:43:02.140: INFO: (10) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 5.528719ms) May 24 21:43:02.140: INFO: (10) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 6.16141ms) May 24 21:43:02.140: INFO: (10) 
/api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 6.092591ms) May 24 21:43:02.140: INFO: (10) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test<... (200; 9.421962ms) May 24 21:43:02.150: INFO: (11) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 9.538782ms) May 24 21:43:02.150: INFO: (11) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: ... (200; 9.559393ms) May 24 21:43:02.150: INFO: (11) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 9.710982ms) May 24 21:43:02.150: INFO: (11) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 10.143692ms) May 24 21:43:02.150: INFO: (11) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 10.143931ms) May 24 21:43:02.154: INFO: (12) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 3.827691ms) May 24 21:43:02.154: INFO: (12) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 3.804762ms) May 24 21:43:02.154: INFO: (12) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 3.861919ms) May 24 21:43:02.155: INFO: (12) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.732513ms) May 24 21:43:02.155: INFO: (12) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 4.812072ms) May 24 21:43:02.155: INFO: (12) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.757548ms) May 24 21:43:02.155: INFO: (12) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test<... 
(200; 4.838656ms) May 24 21:43:02.155: INFO: (12) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 4.731929ms) May 24 21:43:02.156: INFO: (12) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 5.354111ms) May 24 21:43:02.156: INFO: (12) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 5.304669ms) May 24 21:43:02.157: INFO: (12) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 6.230379ms) May 24 21:43:02.176: INFO: (12) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 25.743065ms) May 24 21:43:02.176: INFO: (12) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 25.754684ms) May 24 21:43:02.176: INFO: (12) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 25.968466ms) May 24 21:43:02.177: INFO: (12) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 25.910248ms) May 24 21:43:02.180: INFO: (13) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 2.975ms) May 24 21:43:02.181: INFO: (13) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 4.303204ms) May 24 21:43:02.181: INFO: (13) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 4.260657ms) May 24 21:43:02.181: INFO: (13) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 4.588561ms) May 24 21:43:02.181: INFO: (13) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.667482ms) May 24 21:43:02.182: INFO: (13) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.684539ms) May 24 21:43:02.182: INFO: (13) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar 
(200; 5.060813ms) May 24 21:43:02.183: INFO: (13) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 6.611735ms) May 24 21:43:02.184: INFO: (13) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 6.805657ms) May 24 21:43:02.184: INFO: (13) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... (200; 7.204834ms) May 24 21:43:02.184: INFO: (13) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 7.239205ms) May 24 21:43:02.184: INFO: (13) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 7.583385ms) May 24 21:43:02.184: INFO: (13) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 7.370268ms) May 24 21:43:02.184: INFO: (13) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 7.484089ms) May 24 21:43:02.184: INFO: (13) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... 
(200; 7.550995ms) May 24 21:43:02.184: INFO: (13) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test (200; 5.155429ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 5.458795ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 5.53684ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 5.621499ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 5.770432ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.909556ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 5.933204ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 5.85731ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 5.904613ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.99757ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 6.121005ms) May 24 21:43:02.190: INFO: (14) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 6.067211ms) May 24 21:43:02.191: INFO: (14) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 6.119903ms) May 24 21:43:02.191: INFO: (14) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... 
(200; 6.378164ms) May 24 21:43:02.194: INFO: (15) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 3.053704ms) May 24 21:43:02.194: INFO: (15) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 3.025422ms) May 24 21:43:02.194: INFO: (15) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 2.959352ms) May 24 21:43:02.194: INFO: (15) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test (200; 4.144103ms) May 24 21:43:02.195: INFO: (15) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 4.162958ms) May 24 21:43:02.196: INFO: (15) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... (200; 5.176427ms) May 24 21:43:02.196: INFO: (15) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.312424ms) May 24 21:43:02.196: INFO: (15) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... 
(200; 5.285211ms) May 24 21:43:02.197: INFO: (15) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 6.340364ms) May 24 21:43:02.197: INFO: (15) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 6.414025ms) May 24 21:43:02.197: INFO: (15) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 6.456785ms) May 24 21:43:02.197: INFO: (15) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 6.420847ms) May 24 21:43:02.197: INFO: (15) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 6.499094ms) May 24 21:43:02.197: INFO: (15) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 6.453422ms) May 24 21:43:02.200: INFO: (16) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 2.611926ms) May 24 21:43:02.200: INFO: (16) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 2.701434ms) May 24 21:43:02.200: INFO: (16) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 2.854411ms) May 24 21:43:02.200: INFO: (16) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... 
(200; 2.908252ms) May 24 21:43:02.202: INFO: (16) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test (200; 5.167312ms) May 24 21:43:02.203: INFO: (16) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 5.841747ms) May 24 21:43:02.203: INFO: (16) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 5.962946ms) May 24 21:43:02.203: INFO: (16) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 5.970031ms) May 24 21:43:02.204: INFO: (16) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 5.993458ms) May 24 21:43:02.206: INFO: (17) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... (200; 2.127675ms) May 24 21:43:02.208: INFO: (17) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 4.323444ms) May 24 21:43:02.208: INFO: (17) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 4.434056ms) May 24 21:43:02.208: INFO: (17) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.443266ms) May 24 21:43:02.208: INFO: (17) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: ... 
(200; 5.393943ms) May 24 21:43:02.209: INFO: (17) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 5.727367ms) May 24 21:43:02.209: INFO: (17) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 5.790841ms) May 24 21:43:02.209: INFO: (17) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 5.739461ms) May 24 21:43:02.209: INFO: (17) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 5.793113ms) May 24 21:43:02.209: INFO: (17) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 5.730309ms) May 24 21:43:02.209: INFO: (17) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 5.773264ms) May 24 21:43:02.210: INFO: (17) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 6.389716ms) May 24 21:43:02.213: INFO: (18) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 2.690295ms) May 24 21:43:02.213: INFO: (18) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:460/proxy/: tls baz (200; 2.789834ms) May 24 21:43:02.213: INFO: (18) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... (200; 2.68917ms) May 24 21:43:02.214: INFO: (18) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: ... 
(200; 4.25646ms) May 24 21:43:02.215: INFO: (18) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.28277ms) May 24 21:43:02.215: INFO: (18) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.467536ms) May 24 21:43:02.215: INFO: (18) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 4.584094ms) May 24 21:43:02.215: INFO: (18) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv/proxy/: test (200; 4.792675ms) May 24 21:43:02.215: INFO: (18) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 4.935245ms) May 24 21:43:02.216: INFO: (18) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 5.941212ms) May 24 21:43:02.216: INFO: (18) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 5.91094ms) May 24 21:43:02.216: INFO: (18) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname1/proxy/: foo (200; 5.966139ms) May 24 21:43:02.216: INFO: (18) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 5.991268ms) May 24 21:43:02.216: INFO: (18) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 5.995084ms) May 24 21:43:02.217: INFO: (18) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 6.447276ms) May 24 21:43:02.220: INFO: (19) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:162/proxy/: bar (200; 2.816683ms) May 24 21:43:02.220: INFO: (19) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:1080/proxy/: ... (200; 3.158237ms) May 24 21:43:02.220: INFO: (19) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:1080/proxy/: test<... 
(200; 3.170567ms) May 24 21:43:02.220: INFO: (19) /api/v1/namespaces/proxy-5482/pods/proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 3.221132ms) May 24 21:43:02.221: INFO: (19) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-mpxpc-25nvv:160/proxy/: foo (200; 4.597127ms) May 24 21:43:02.222: INFO: (19) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:443/proxy/: test (200; 5.300238ms) May 24 21:43:02.222: INFO: (19) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname2/proxy/: bar (200; 5.270949ms) May 24 21:43:02.222: INFO: (19) /api/v1/namespaces/proxy-5482/services/proxy-service-mpxpc:portname1/proxy/: foo (200; 5.437608ms) May 24 21:43:02.222: INFO: (19) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-mpxpc-25nvv:462/proxy/: tls qux (200; 5.299772ms) May 24 21:43:02.222: INFO: (19) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname2/proxy/: tls qux (200; 5.433819ms) May 24 21:43:02.222: INFO: (19) /api/v1/namespaces/proxy-5482/services/http:proxy-service-mpxpc:portname2/proxy/: bar (200; 5.578634ms) May 24 21:43:02.223: INFO: (19) /api/v1/namespaces/proxy-5482/services/https:proxy-service-mpxpc:tlsportname1/proxy/: tls baz (200; 5.711461ms) STEP: deleting ReplicationController proxy-service-mpxpc in namespace proxy-5482, will wait for the garbage collector to delete the pods May 24 21:43:02.281: INFO: Deleting ReplicationController proxy-service-mpxpc took: 6.809151ms May 24 21:43:02.382: INFO: Terminating ReplicationController proxy-service-mpxpc pods took: 100.168532ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:43:09.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5482" for this suite. 
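The proxy URLs exercised in the test above all follow one fixed pattern: `/api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>[:<port-or-portname>]/proxy/`, with the scheme and port parts optional. A minimal sketch in Go (the suite's own language) that assembles those paths — the namespace and resource names below are simply the ones from this run, and `proxyPath` is a hypothetical helper for illustration, not part of the e2e framework:

```go
package main

import "fmt"

// proxyPath builds an API-server proxy URL of the form seen in the log:
// /api/v1/namespaces/<ns>/<resource>/[<scheme>:]<name>[:<port>]/proxy/
// scheme ("http"/"https") and port are optional, matching the variants
// the test requests.
func proxyPath(ns, resource, scheme, name, port string) string {
	target := name
	if scheme != "" {
		target = scheme + ":" + target
	}
	if port != "" {
		target = target + ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", ns, resource, target)
}

func main() {
	// Reproduce two of the URLs requested in the log output above.
	fmt.Println(proxyPath("proxy-5482", "pods", "https", "proxy-service-mpxpc-25nvv", "462"))
	fmt.Println(proxyPath("proxy-5482", "services", "", "proxy-service-mpxpc", "portname1"))
}
```

Each iteration numbered (12) through (19) in the log hits every combination of direct pod port, named service port, and http/https scheme through this one path scheme.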
• [SLOW TEST:14.480 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":87,"skipped":1400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:43:09.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7877 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 21:43:09.343: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 24 21:43:37.469: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.131:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7877 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} May 24 21:43:37.469: INFO: >>> kubeConfig: /root/.kube/config I0524 21:43:37.510044 6 log.go:172] (0xc0027cbb80) (0xc001a84be0) Create stream I0524 21:43:37.510081 6 log.go:172] (0xc0027cbb80) (0xc001a84be0) Stream added, broadcasting: 1 I0524 21:43:37.512395 6 log.go:172] (0xc0027cbb80) Reply frame received for 1 I0524 21:43:37.512417 6 log.go:172] (0xc0027cbb80) (0xc0015388c0) Create stream I0524 21:43:37.512426 6 log.go:172] (0xc0027cbb80) (0xc0015388c0) Stream added, broadcasting: 3 I0524 21:43:37.513740 6 log.go:172] (0xc0027cbb80) Reply frame received for 3 I0524 21:43:37.513770 6 log.go:172] (0xc0027cbb80) (0xc001538b40) Create stream I0524 21:43:37.513782 6 log.go:172] (0xc0027cbb80) (0xc001538b40) Stream added, broadcasting: 5 I0524 21:43:37.514773 6 log.go:172] (0xc0027cbb80) Reply frame received for 5 I0524 21:43:37.765950 6 log.go:172] (0xc0027cbb80) Data frame received for 3 I0524 21:43:37.765976 6 log.go:172] (0xc0015388c0) (3) Data frame handling I0524 21:43:37.765988 6 log.go:172] (0xc0015388c0) (3) Data frame sent I0524 21:43:37.766000 6 log.go:172] (0xc0027cbb80) Data frame received for 3 I0524 21:43:37.766008 6 log.go:172] (0xc0015388c0) (3) Data frame handling I0524 21:43:37.766019 6 log.go:172] (0xc0027cbb80) Data frame received for 5 I0524 21:43:37.766027 6 log.go:172] (0xc001538b40) (5) Data frame handling I0524 21:43:37.767571 6 log.go:172] (0xc0027cbb80) Data frame received for 1 I0524 21:43:37.767613 6 log.go:172] (0xc001a84be0) (1) Data frame handling I0524 21:43:37.767630 6 log.go:172] (0xc001a84be0) (1) Data frame sent I0524 21:43:37.767641 6 log.go:172] (0xc0027cbb80) (0xc001a84be0) Stream removed, broadcasting: 1 I0524 21:43:37.767723 6 log.go:172] (0xc0027cbb80) (0xc001a84be0) Stream removed, broadcasting: 1 I0524 21:43:37.767738 6 log.go:172] (0xc0027cbb80) (0xc0015388c0) Stream removed, broadcasting: 3 I0524 21:43:37.767794 6 log.go:172] (0xc0027cbb80) Go away received I0524 
21:43:37.767848 6 log.go:172] (0xc0027cbb80) (0xc001538b40) Stream removed, broadcasting: 5 May 24 21:43:37.767: INFO: Found all expected endpoints: [netserver-0] May 24 21:43:37.770: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.185:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7877 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:43:37.770: INFO: >>> kubeConfig: /root/.kube/config I0524 21:43:37.795471 6 log.go:172] (0xc004c60210) (0xc001e723c0) Create stream I0524 21:43:37.795493 6 log.go:172] (0xc004c60210) (0xc001e723c0) Stream added, broadcasting: 1 I0524 21:43:37.797087 6 log.go:172] (0xc004c60210) Reply frame received for 1 I0524 21:43:37.797235 6 log.go:172] (0xc004c60210) (0xc001e72460) Create stream I0524 21:43:37.797250 6 log.go:172] (0xc004c60210) (0xc001e72460) Stream added, broadcasting: 3 I0524 21:43:37.797874 6 log.go:172] (0xc004c60210) Reply frame received for 3 I0524 21:43:37.797891 6 log.go:172] (0xc004c60210) (0xc001a84d20) Create stream I0524 21:43:37.797899 6 log.go:172] (0xc004c60210) (0xc001a84d20) Stream added, broadcasting: 5 I0524 21:43:37.798547 6 log.go:172] (0xc004c60210) Reply frame received for 5 I0524 21:43:37.846496 6 log.go:172] (0xc004c60210) Data frame received for 3 I0524 21:43:37.846525 6 log.go:172] (0xc001e72460) (3) Data frame handling I0524 21:43:37.846540 6 log.go:172] (0xc001e72460) (3) Data frame sent I0524 21:43:37.846819 6 log.go:172] (0xc004c60210) Data frame received for 3 I0524 21:43:37.846846 6 log.go:172] (0xc001e72460) (3) Data frame handling I0524 21:43:37.846925 6 log.go:172] (0xc004c60210) Data frame received for 5 I0524 21:43:37.846950 6 log.go:172] (0xc001a84d20) (5) Data frame handling I0524 21:43:37.848262 6 log.go:172] (0xc004c60210) Data frame received for 1 I0524 21:43:37.848277 6 log.go:172] (0xc001e723c0) (1) Data frame handling I0524 
21:43:37.848285 6 log.go:172] (0xc001e723c0) (1) Data frame sent I0524 21:43:37.848416 6 log.go:172] (0xc004c60210) (0xc001e723c0) Stream removed, broadcasting: 1 I0524 21:43:37.848451 6 log.go:172] (0xc004c60210) Go away received I0524 21:43:37.848827 6 log.go:172] (0xc004c60210) (0xc001e723c0) Stream removed, broadcasting: 1 I0524 21:43:37.848844 6 log.go:172] (0xc004c60210) (0xc001e72460) Stream removed, broadcasting: 3 I0524 21:43:37.848852 6 log.go:172] (0xc004c60210) (0xc001a84d20) Stream removed, broadcasting: 5 May 24 21:43:37.848: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:43:37.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7877" for this suite. • [SLOW TEST:28.616 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:43:37.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:43:38.821: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:43:41.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953418, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953418, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953418, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953418, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:43:43.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953418, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953418, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953418, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953418, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:43:46.080: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 24 21:43:46.126: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:43:46.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4384" for this suite. STEP: Destroying namespace "webhook-4384-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.303 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":89,"skipped":1464,"failed":0} [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:43:46.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:43:50.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-8089" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1464,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:43:50.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-b96w STEP: Creating a pod to test atomic-volume-subpath May 24 21:43:50.668: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-b96w" in namespace "subpath-5327" to be "success or failure" May 24 21:43:50.672: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003282ms May 24 21:43:52.676: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008427003s May 24 21:43:54.681: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 4.013276313s May 24 21:43:56.686: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.01784832s May 24 21:43:58.690: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 8.022012064s May 24 21:44:00.695: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 10.026692876s May 24 21:44:02.700: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 12.031603369s May 24 21:44:04.705: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 14.036517761s May 24 21:44:06.709: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 16.04082699s May 24 21:44:08.714: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 18.045653633s May 24 21:44:10.718: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 20.050041442s May 24 21:44:12.722: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Running", Reason="", readiness=true. Elapsed: 22.054383679s May 24 21:44:14.727: INFO: Pod "pod-subpath-test-projected-b96w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.058647192s STEP: Saw pod success May 24 21:44:14.727: INFO: Pod "pod-subpath-test-projected-b96w" satisfied condition "success or failure" May 24 21:44:14.730: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-b96w container test-container-subpath-projected-b96w: STEP: delete the pod May 24 21:44:14.778: INFO: Waiting for pod pod-subpath-test-projected-b96w to disappear May 24 21:44:14.787: INFO: Pod pod-subpath-test-projected-b96w no longer exists STEP: Deleting pod pod-subpath-test-projected-b96w May 24 21:44:14.787: INFO: Deleting pod "pod-subpath-test-projected-b96w" in namespace "subpath-5327" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:44:14.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5327" for this suite. • [SLOW TEST:24.456 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":91,"skipped":1469,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:44:14.796: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod May 24 21:44:14.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1829 -- logs-generator --log-lines-total 100 --run-duration 20s' May 24 21:44:14.952: INFO: stderr: "" May 24 21:44:14.952: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 24 21:44:14.952: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 24 21:44:14.952: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1829" to be "running and ready, or succeeded" May 24 21:44:14.962: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.240327ms May 24 21:44:16.967: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015561046s May 24 21:44:18.973: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.021509821s May 24 21:44:18.973: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 24 21:44:18.973: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings May 24 21:44:18.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1829' May 24 21:44:19.090: INFO: stderr: "" May 24 21:44:19.090: INFO: stdout: "I0524 21:44:17.293672 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/xcb 227\nI0524 21:44:17.493881 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wnv 342\nI0524 21:44:17.693844 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/vdc 531\nI0524 21:44:17.893854 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/tncd 498\nI0524 21:44:18.093908 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/9q6 467\nI0524 21:44:18.293913 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/kdqw 380\nI0524 21:44:18.493841 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/j6f7 272\nI0524 21:44:18.693886 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/j55m 411\nI0524 21:44:18.893925 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/mzd7 291\n" STEP: limiting log lines May 24 21:44:19.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1829 --tail=1' May 24 21:44:19.207: INFO: stderr: "" May 24 21:44:19.207: INFO: stdout: "I0524 21:44:19.093818 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/t7f 491\n" May 24 21:44:19.207: INFO: got output "I0524 21:44:19.093818 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/t7f 491\n" STEP: limiting log bytes May 24 21:44:19.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1829 --limit-bytes=1' May 24 21:44:19.313: INFO: stderr: "" May 24 21:44:19.313: INFO: stdout: "I" May 24 21:44:19.313: INFO: got output "I" STEP: exposing timestamps May 24 21:44:19.313: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1829 --tail=1 --timestamps' May 24 21:44:19.423: INFO: stderr: "" May 24 21:44:19.424: INFO: stdout: "2020-05-24T21:44:19.293975757Z I0524 21:44:19.293825 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/fkq6 412\n" May 24 21:44:19.424: INFO: got output "2020-05-24T21:44:19.293975757Z I0524 21:44:19.293825 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/fkq6 412\n" STEP: restricting to a time range May 24 21:44:21.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1829 --since=1s' May 24 21:44:22.034: INFO: stderr: "" May 24 21:44:22.034: INFO: stdout: "I0524 21:44:21.093915 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/vhk 410\nI0524 21:44:21.293901 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/f4p 459\nI0524 21:44:21.493842 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/t4p8 454\nI0524 21:44:21.693838 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/ml4 490\nI0524 21:44:21.893912 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/wm9 504\n" May 24 21:44:22.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1829 --since=24h' May 24 21:44:22.147: INFO: stderr: "" May 24 21:44:22.147: INFO: stdout: "I0524 21:44:17.293672 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/xcb 227\nI0524 21:44:17.493881 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wnv 342\nI0524 21:44:17.693844 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/vdc 531\nI0524 21:44:17.893854 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/tncd 498\nI0524 21:44:18.093908 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/9q6 467\nI0524 21:44:18.293913 1 
logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/kdqw 380\nI0524 21:44:18.493841 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/j6f7 272\nI0524 21:44:18.693886 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/j55m 411\nI0524 21:44:18.893925 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/mzd7 291\nI0524 21:44:19.093818 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/t7f 491\nI0524 21:44:19.293825 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/fkq6 412\nI0524 21:44:19.493890 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/28z7 236\nI0524 21:44:19.693872 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/gvx 218\nI0524 21:44:19.893902 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/b58 223\nI0524 21:44:20.093878 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/6xlf 444\nI0524 21:44:20.293891 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/4bg9 590\nI0524 21:44:20.493862 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/2cd 234\nI0524 21:44:20.693873 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/nlr 538\nI0524 21:44:20.893857 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/l6kx 209\nI0524 21:44:21.093915 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/vhk 410\nI0524 21:44:21.293901 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/f4p 459\nI0524 21:44:21.493842 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/t4p8 454\nI0524 21:44:21.693838 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/ml4 490\nI0524 21:44:21.893912 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/wm9 504\nI0524 21:44:22.093887 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/kzv 387\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 24 21:44:22.147: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1829' May 24 21:44:29.515: INFO: stderr: "" May 24 21:44:29.515: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:44:29.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1829" for this suite. • [SLOW TEST:14.726 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":92,"skipped":1483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:44:29.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] 
should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 21:44:29.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2647' May 24 21:44:29.699: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 24 21:44:29.699: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 24 21:44:31.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2647' May 24 21:44:31.851: INFO: stderr: "" May 24 21:44:31.852: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:44:31.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2647" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":93,"skipped":1509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:44:31.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-e1785af0-8920-446e-b4e0-d98e475767a6 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:44:32.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5425" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":94,"skipped":1540,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:44:32.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 24 21:44:36.784: INFO: Successfully updated pod "pod-update-activedeadlineseconds-789c220c-a837-4da2-b12a-ecde8e1a1ffb" May 24 21:44:36.785: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-789c220c-a837-4da2-b12a-ecde8e1a1ffb" in namespace "pods-7351" to be "terminated due to deadline exceeded" May 24 21:44:36.799: INFO: Pod "pod-update-activedeadlineseconds-789c220c-a837-4da2-b12a-ecde8e1a1ffb": Phase="Running", Reason="", readiness=true. Elapsed: 14.533458ms May 24 21:44:38.803: INFO: Pod "pod-update-activedeadlineseconds-789c220c-a837-4da2-b12a-ecde8e1a1ffb": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.018547802s May 24 21:44:38.803: INFO: Pod "pod-update-activedeadlineseconds-789c220c-a837-4da2-b12a-ecde8e1a1ffb" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:44:38.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7351" for this suite. • [SLOW TEST:6.758 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1549,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:44:38.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready 
May 24 21:44:39.713: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:44:41.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953479, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953479, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953479, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953479, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:44:44.762: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:44:44.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4262" for this suite. STEP: Destroying namespace "webhook-4262-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.140 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":96,"skipped":1571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:44:44.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 24 21:44:45.008: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 21:44:45.021: INFO: Waiting for 
terminating namespaces to be deleted... May 24 21:44:45.023: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 24 21:44:45.042: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:44:45.043: INFO: Container kindnet-cni ready: true, restart count 0 May 24 21:44:45.043: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:44:45.043: INFO: Container kube-proxy ready: true, restart count 0 May 24 21:44:45.043: INFO: sample-webhook-deployment-5f65f8c764-d9gmp from webhook-4262 started at 2020-05-24 21:44:39 +0000 UTC (1 container statuses recorded) May 24 21:44:45.043: INFO: Container sample-webhook ready: true, restart count 0 May 24 21:44:45.043: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 24 21:44:45.049: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 24 21:44:45.049: INFO: Container kube-hunter ready: false, restart count 0 May 24 21:44:45.049: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:44:45.049: INFO: Container kindnet-cni ready: true, restart count 0 May 24 21:44:45.049: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 24 21:44:45.049: INFO: Container kube-bench ready: false, restart count 0 May 24 21:44:45.049: INFO: e2e-test-httpd-deployment-594dddd44f-q2qhf from kubectl-2647 started at 2020-05-24 21:44:29 +0000 UTC (1 container statuses recorded) May 24 21:44:45.049: INFO: Container e2e-test-httpd-deployment ready: true, restart count 0 May 24 21:44:45.049: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:44:45.049: INFO: Container kube-proxy ready: true, restart count 0 [It] validates 
that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-06f61931-60d1-46b6-bcf2-054809bb39b0 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-06f61931-60d1-46b6-bcf2-054809bb39b0 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-06f61931-60d1-46b6-bcf2-054809bb39b0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:44:53.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6871" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.342 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":97,"skipped":1600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:44:53.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4450.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4450.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4450.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4450.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4450.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4450.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 21:44:59.445: INFO: DNS probes using dns-4450/dns-test-0ca738f1-e9a1-48a4-9bd0-f13a19e2ae8d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:44:59.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4450" for this suite. 
• [SLOW TEST:6.291 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":98,"skipped":1638,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:44:59.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:45:00.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-935b97b1-681d-4739-95dc-49b1121f9081" in namespace "projected-6131" to be "success or failure" May 24 21:45:00.163: INFO: Pod "downwardapi-volume-935b97b1-681d-4739-95dc-49b1121f9081": Phase="Pending", Reason="", readiness=false. Elapsed: 44.435106ms May 24 21:45:02.166: INFO: Pod "downwardapi-volume-935b97b1-681d-4739-95dc-49b1121f9081": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048036056s May 24 21:45:04.171: INFO: Pod "downwardapi-volume-935b97b1-681d-4739-95dc-49b1121f9081": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052302355s STEP: Saw pod success May 24 21:45:04.171: INFO: Pod "downwardapi-volume-935b97b1-681d-4739-95dc-49b1121f9081" satisfied condition "success or failure" May 24 21:45:04.174: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-935b97b1-681d-4739-95dc-49b1121f9081 container client-container: STEP: delete the pod May 24 21:45:04.287: INFO: Waiting for pod downwardapi-volume-935b97b1-681d-4739-95dc-49b1121f9081 to disappear May 24 21:45:04.290: INFO: Pod downwardapi-volume-935b97b1-681d-4739-95dc-49b1121f9081 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:45:04.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6131" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1638,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:45:04.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 24 21:45:08.967: INFO: Successfully updated pod "labelsupdatef0b79ee8-a0f6-4ddd-a1cc-adf809294471" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:45:11.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3602" for this suite. 
• [SLOW TEST:6.712 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:45:11.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e5d98f3c-b224-4b80-9821-4a775fbf5cd5 STEP: Creating a pod to test consume secrets May 24 21:45:11.077: INFO: Waiting up to 5m0s for pod "pod-secrets-2bd091d5-2d76-4226-8d21-b2cb88d8cc81" in namespace "secrets-6412" to be "success or failure" May 24 21:45:11.082: INFO: Pod "pod-secrets-2bd091d5-2d76-4226-8d21-b2cb88d8cc81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.672652ms May 24 21:45:13.191: INFO: Pod "pod-secrets-2bd091d5-2d76-4226-8d21-b2cb88d8cc81": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.113136067s
May 24 21:45:15.194: INFO: Pod "pod-secrets-2bd091d5-2d76-4226-8d21-b2cb88d8cc81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116331622s
STEP: Saw pod success
May 24 21:45:15.194: INFO: Pod "pod-secrets-2bd091d5-2d76-4226-8d21-b2cb88d8cc81" satisfied condition "success or failure"
May 24 21:45:15.196: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2bd091d5-2d76-4226-8d21-b2cb88d8cc81 container secret-env-test:
STEP: delete the pod
May 24 21:45:15.243: INFO: Waiting for pod pod-secrets-2bd091d5-2d76-4226-8d21-b2cb88d8cc81 to disappear
May 24 21:45:15.267: INFO: Pod pod-secrets-2bd091d5-2d76-4226-8d21-b2cb88d8cc81 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:45:15.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6412" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1682,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:45:15.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:45:15.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3293" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":102,"skipped":1687,"failed":0}
S
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:45:15.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 24 21:45:25.605: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true
CaptureStderr:true PreserveWhitespace:false} May 24 21:45:25.605: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:25.646605 6 log.go:172] (0xc0017068f0) (0xc001a84dc0) Create stream I0524 21:45:25.646645 6 log.go:172] (0xc0017068f0) (0xc001a84dc0) Stream added, broadcasting: 1 I0524 21:45:25.648767 6 log.go:172] (0xc0017068f0) Reply frame received for 1 I0524 21:45:25.648813 6 log.go:172] (0xc0017068f0) (0xc0028bc000) Create stream I0524 21:45:25.648830 6 log.go:172] (0xc0017068f0) (0xc0028bc000) Stream added, broadcasting: 3 I0524 21:45:25.649918 6 log.go:172] (0xc0017068f0) Reply frame received for 3 I0524 21:45:25.649958 6 log.go:172] (0xc0017068f0) (0xc0028bc0a0) Create stream I0524 21:45:25.649973 6 log.go:172] (0xc0017068f0) (0xc0028bc0a0) Stream added, broadcasting: 5 I0524 21:45:25.650757 6 log.go:172] (0xc0017068f0) Reply frame received for 5 I0524 21:45:25.719033 6 log.go:172] (0xc0017068f0) Data frame received for 3 I0524 21:45:25.719097 6 log.go:172] (0xc0028bc000) (3) Data frame handling I0524 21:45:25.719121 6 log.go:172] (0xc0028bc000) (3) Data frame sent I0524 21:45:25.719133 6 log.go:172] (0xc0017068f0) Data frame received for 3 I0524 21:45:25.719154 6 log.go:172] (0xc0028bc000) (3) Data frame handling I0524 21:45:25.719186 6 log.go:172] (0xc0017068f0) Data frame received for 5 I0524 21:45:25.719219 6 log.go:172] (0xc0028bc0a0) (5) Data frame handling I0524 21:45:25.720502 6 log.go:172] (0xc0017068f0) Data frame received for 1 I0524 21:45:25.720521 6 log.go:172] (0xc001a84dc0) (1) Data frame handling I0524 21:45:25.720557 6 log.go:172] (0xc001a84dc0) (1) Data frame sent I0524 21:45:25.720583 6 log.go:172] (0xc0017068f0) (0xc001a84dc0) Stream removed, broadcasting: 1 I0524 21:45:25.720637 6 log.go:172] (0xc0017068f0) Go away received I0524 21:45:25.720668 6 log.go:172] (0xc0017068f0) (0xc001a84dc0) Stream removed, broadcasting: 1 I0524 21:45:25.720692 6 log.go:172] (0xc0017068f0) (0xc0028bc000) Stream removed, broadcasting: 3 I0524 
21:45:25.720705 6 log.go:172] (0xc0017068f0) (0xc0028bc0a0) Stream removed, broadcasting: 5 May 24 21:45:25.720: INFO: Exec stderr: "" May 24 21:45:25.720: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:25.720: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:25.748616 6 log.go:172] (0xc001707080) (0xc001a84fa0) Create stream I0524 21:45:25.748650 6 log.go:172] (0xc001707080) (0xc001a84fa0) Stream added, broadcasting: 1 I0524 21:45:25.750763 6 log.go:172] (0xc001707080) Reply frame received for 1 I0524 21:45:25.750802 6 log.go:172] (0xc001707080) (0xc0023a0f00) Create stream I0524 21:45:25.750822 6 log.go:172] (0xc001707080) (0xc0023a0f00) Stream added, broadcasting: 3 I0524 21:45:25.751804 6 log.go:172] (0xc001707080) Reply frame received for 3 I0524 21:45:25.751835 6 log.go:172] (0xc001707080) (0xc0028bc140) Create stream I0524 21:45:25.751846 6 log.go:172] (0xc001707080) (0xc0028bc140) Stream added, broadcasting: 5 I0524 21:45:25.752705 6 log.go:172] (0xc001707080) Reply frame received for 5 I0524 21:45:25.809942 6 log.go:172] (0xc001707080) Data frame received for 5 I0524 21:45:25.809994 6 log.go:172] (0xc0028bc140) (5) Data frame handling I0524 21:45:25.810022 6 log.go:172] (0xc001707080) Data frame received for 3 I0524 21:45:25.810032 6 log.go:172] (0xc0023a0f00) (3) Data frame handling I0524 21:45:25.810048 6 log.go:172] (0xc0023a0f00) (3) Data frame sent I0524 21:45:25.810074 6 log.go:172] (0xc001707080) Data frame received for 3 I0524 21:45:25.810106 6 log.go:172] (0xc0023a0f00) (3) Data frame handling I0524 21:45:25.811633 6 log.go:172] (0xc001707080) Data frame received for 1 I0524 21:45:25.811666 6 log.go:172] (0xc001a84fa0) (1) Data frame handling I0524 21:45:25.811691 6 log.go:172] (0xc001a84fa0) (1) Data frame sent I0524 21:45:25.811770 6 log.go:172] (0xc001707080) 
(0xc001a84fa0) Stream removed, broadcasting: 1 I0524 21:45:25.811822 6 log.go:172] (0xc001707080) Go away received I0524 21:45:25.811966 6 log.go:172] (0xc001707080) (0xc001a84fa0) Stream removed, broadcasting: 1 I0524 21:45:25.811998 6 log.go:172] (0xc001707080) (0xc0023a0f00) Stream removed, broadcasting: 3 I0524 21:45:25.812020 6 log.go:172] (0xc001707080) (0xc0028bc140) Stream removed, broadcasting: 5 May 24 21:45:25.812: INFO: Exec stderr: "" May 24 21:45:25.812: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:25.812: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:25.842832 6 log.go:172] (0xc003cd8370) (0xc0028bc500) Create stream I0524 21:45:25.842868 6 log.go:172] (0xc003cd8370) (0xc0028bc500) Stream added, broadcasting: 1 I0524 21:45:25.844911 6 log.go:172] (0xc003cd8370) Reply frame received for 1 I0524 21:45:25.844951 6 log.go:172] (0xc003cd8370) (0xc0028bc5a0) Create stream I0524 21:45:25.844965 6 log.go:172] (0xc003cd8370) (0xc0028bc5a0) Stream added, broadcasting: 3 I0524 21:45:25.846786 6 log.go:172] (0xc003cd8370) Reply frame received for 3 I0524 21:45:25.846815 6 log.go:172] (0xc003cd8370) (0xc0023a0fa0) Create stream I0524 21:45:25.846825 6 log.go:172] (0xc003cd8370) (0xc0023a0fa0) Stream added, broadcasting: 5 I0524 21:45:25.848040 6 log.go:172] (0xc003cd8370) Reply frame received for 5 I0524 21:45:25.921741 6 log.go:172] (0xc003cd8370) Data frame received for 5 I0524 21:45:25.921802 6 log.go:172] (0xc0023a0fa0) (5) Data frame handling I0524 21:45:25.921833 6 log.go:172] (0xc003cd8370) Data frame received for 3 I0524 21:45:25.921849 6 log.go:172] (0xc0028bc5a0) (3) Data frame handling I0524 21:45:25.921863 6 log.go:172] (0xc0028bc5a0) (3) Data frame sent I0524 21:45:25.921875 6 log.go:172] (0xc003cd8370) Data frame received for 3 I0524 21:45:25.921890 6 log.go:172] (0xc0028bc5a0) 
(3) Data frame handling I0524 21:45:25.923212 6 log.go:172] (0xc003cd8370) Data frame received for 1 I0524 21:45:25.923226 6 log.go:172] (0xc0028bc500) (1) Data frame handling I0524 21:45:25.923248 6 log.go:172] (0xc0028bc500) (1) Data frame sent I0524 21:45:25.923387 6 log.go:172] (0xc003cd8370) (0xc0028bc500) Stream removed, broadcasting: 1 I0524 21:45:25.923497 6 log.go:172] (0xc003cd8370) (0xc0028bc500) Stream removed, broadcasting: 1 I0524 21:45:25.923514 6 log.go:172] (0xc003cd8370) (0xc0028bc5a0) Stream removed, broadcasting: 3 I0524 21:45:25.923554 6 log.go:172] (0xc003cd8370) Go away received I0524 21:45:25.923659 6 log.go:172] (0xc003cd8370) (0xc0023a0fa0) Stream removed, broadcasting: 5 May 24 21:45:25.923: INFO: Exec stderr: "" May 24 21:45:25.923: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:25.923: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:25.954969 6 log.go:172] (0xc0027cbe40) (0xc0023a14a0) Create stream I0524 21:45:25.955009 6 log.go:172] (0xc0027cbe40) (0xc0023a14a0) Stream added, broadcasting: 1 I0524 21:45:25.956895 6 log.go:172] (0xc0027cbe40) Reply frame received for 1 I0524 21:45:25.956947 6 log.go:172] (0xc0027cbe40) (0xc0023a1540) Create stream I0524 21:45:25.956969 6 log.go:172] (0xc0027cbe40) (0xc0023a1540) Stream added, broadcasting: 3 I0524 21:45:25.958140 6 log.go:172] (0xc0027cbe40) Reply frame received for 3 I0524 21:45:25.958177 6 log.go:172] (0xc0027cbe40) (0xc0023a1680) Create stream I0524 21:45:25.958188 6 log.go:172] (0xc0027cbe40) (0xc0023a1680) Stream added, broadcasting: 5 I0524 21:45:25.959092 6 log.go:172] (0xc0027cbe40) Reply frame received for 5 I0524 21:45:26.014129 6 log.go:172] (0xc0027cbe40) Data frame received for 3 I0524 21:45:26.014173 6 log.go:172] (0xc0027cbe40) Data frame received for 5 I0524 21:45:26.014232 6 log.go:172] 
(0xc0023a1680) (5) Data frame handling I0524 21:45:26.014293 6 log.go:172] (0xc0023a1540) (3) Data frame handling I0524 21:45:26.014342 6 log.go:172] (0xc0023a1540) (3) Data frame sent I0524 21:45:26.014367 6 log.go:172] (0xc0027cbe40) Data frame received for 3 I0524 21:45:26.014383 6 log.go:172] (0xc0023a1540) (3) Data frame handling I0524 21:45:26.016047 6 log.go:172] (0xc0027cbe40) Data frame received for 1 I0524 21:45:26.016081 6 log.go:172] (0xc0023a14a0) (1) Data frame handling I0524 21:45:26.016106 6 log.go:172] (0xc0023a14a0) (1) Data frame sent I0524 21:45:26.016235 6 log.go:172] (0xc0027cbe40) (0xc0023a14a0) Stream removed, broadcasting: 1 I0524 21:45:26.016328 6 log.go:172] (0xc0027cbe40) (0xc0023a14a0) Stream removed, broadcasting: 1 I0524 21:45:26.016359 6 log.go:172] (0xc0027cbe40) (0xc0023a1540) Stream removed, broadcasting: 3 I0524 21:45:26.016486 6 log.go:172] (0xc0027cbe40) Go away received I0524 21:45:26.016516 6 log.go:172] (0xc0027cbe40) (0xc0023a1680) Stream removed, broadcasting: 5 May 24 21:45:26.016: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 24 21:45:26.016: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:26.016: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:26.041544 6 log.go:172] (0xc004c60630) (0xc0015392c0) Create stream I0524 21:45:26.041580 6 log.go:172] (0xc004c60630) (0xc0015392c0) Stream added, broadcasting: 1 I0524 21:45:26.043957 6 log.go:172] (0xc004c60630) Reply frame received for 1 I0524 21:45:26.043998 6 log.go:172] (0xc004c60630) (0xc0023a1900) Create stream I0524 21:45:26.044012 6 log.go:172] (0xc004c60630) (0xc0023a1900) Stream added, broadcasting: 3 I0524 21:45:26.045413 6 log.go:172] (0xc004c60630) Reply frame received for 3 I0524 21:45:26.045432 6 log.go:172] 
(0xc004c60630) (0xc0023a19a0) Create stream I0524 21:45:26.045439 6 log.go:172] (0xc004c60630) (0xc0023a19a0) Stream added, broadcasting: 5 I0524 21:45:26.046391 6 log.go:172] (0xc004c60630) Reply frame received for 5 I0524 21:45:26.116860 6 log.go:172] (0xc004c60630) Data frame received for 5 I0524 21:45:26.116900 6 log.go:172] (0xc0023a19a0) (5) Data frame handling I0524 21:45:26.116925 6 log.go:172] (0xc004c60630) Data frame received for 3 I0524 21:45:26.116942 6 log.go:172] (0xc0023a1900) (3) Data frame handling I0524 21:45:26.116956 6 log.go:172] (0xc0023a1900) (3) Data frame sent I0524 21:45:26.116971 6 log.go:172] (0xc004c60630) Data frame received for 3 I0524 21:45:26.116984 6 log.go:172] (0xc0023a1900) (3) Data frame handling I0524 21:45:26.118843 6 log.go:172] (0xc004c60630) Data frame received for 1 I0524 21:45:26.118893 6 log.go:172] (0xc0015392c0) (1) Data frame handling I0524 21:45:26.119003 6 log.go:172] (0xc0015392c0) (1) Data frame sent I0524 21:45:26.119102 6 log.go:172] (0xc004c60630) (0xc0015392c0) Stream removed, broadcasting: 1 I0524 21:45:26.119139 6 log.go:172] (0xc004c60630) Go away received I0524 21:45:26.119300 6 log.go:172] (0xc004c60630) (0xc0015392c0) Stream removed, broadcasting: 1 I0524 21:45:26.119340 6 log.go:172] (0xc004c60630) (0xc0023a1900) Stream removed, broadcasting: 3 I0524 21:45:26.119377 6 log.go:172] (0xc004c60630) (0xc0023a19a0) Stream removed, broadcasting: 5 May 24 21:45:26.119: INFO: Exec stderr: "" May 24 21:45:26.119: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:26.119: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:26.155387 6 log.go:172] (0xc004c60c60) (0xc001539680) Create stream I0524 21:45:26.155410 6 log.go:172] (0xc004c60c60) (0xc001539680) Stream added, broadcasting: 1 I0524 21:45:26.157850 6 log.go:172] (0xc004c60c60) Reply frame 
received for 1 I0524 21:45:26.157891 6 log.go:172] (0xc004c60c60) (0xc001539720) Create stream I0524 21:45:26.157906 6 log.go:172] (0xc004c60c60) (0xc001539720) Stream added, broadcasting: 3 I0524 21:45:26.159065 6 log.go:172] (0xc004c60c60) Reply frame received for 3 I0524 21:45:26.159103 6 log.go:172] (0xc004c60c60) (0xc001e72be0) Create stream I0524 21:45:26.159116 6 log.go:172] (0xc004c60c60) (0xc001e72be0) Stream added, broadcasting: 5 I0524 21:45:26.159991 6 log.go:172] (0xc004c60c60) Reply frame received for 5 I0524 21:45:26.226933 6 log.go:172] (0xc004c60c60) Data frame received for 5 I0524 21:45:26.226958 6 log.go:172] (0xc001e72be0) (5) Data frame handling I0524 21:45:26.226969 6 log.go:172] (0xc004c60c60) Data frame received for 3 I0524 21:45:26.226981 6 log.go:172] (0xc001539720) (3) Data frame handling I0524 21:45:26.226999 6 log.go:172] (0xc001539720) (3) Data frame sent I0524 21:45:26.227008 6 log.go:172] (0xc004c60c60) Data frame received for 3 I0524 21:45:26.227014 6 log.go:172] (0xc001539720) (3) Data frame handling I0524 21:45:26.228319 6 log.go:172] (0xc004c60c60) Data frame received for 1 I0524 21:45:26.228359 6 log.go:172] (0xc001539680) (1) Data frame handling I0524 21:45:26.228381 6 log.go:172] (0xc001539680) (1) Data frame sent I0524 21:45:26.228401 6 log.go:172] (0xc004c60c60) (0xc001539680) Stream removed, broadcasting: 1 I0524 21:45:26.228496 6 log.go:172] (0xc004c60c60) (0xc001539680) Stream removed, broadcasting: 1 I0524 21:45:26.228521 6 log.go:172] (0xc004c60c60) (0xc001539720) Stream removed, broadcasting: 3 I0524 21:45:26.228583 6 log.go:172] (0xc004c60c60) Go away received I0524 21:45:26.228695 6 log.go:172] (0xc004c60c60) (0xc001e72be0) Stream removed, broadcasting: 5 May 24 21:45:26.228: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 24 21:45:26.228: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4518 
PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:26.228: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:26.267615 6 log.go:172] (0xc0016cc6e0) (0xc001e72e60) Create stream I0524 21:45:26.267652 6 log.go:172] (0xc0016cc6e0) (0xc001e72e60) Stream added, broadcasting: 1 I0524 21:45:26.270492 6 log.go:172] (0xc0016cc6e0) Reply frame received for 1 I0524 21:45:26.270555 6 log.go:172] (0xc0016cc6e0) (0xc0015397c0) Create stream I0524 21:45:26.270573 6 log.go:172] (0xc0016cc6e0) (0xc0015397c0) Stream added, broadcasting: 3 I0524 21:45:26.272040 6 log.go:172] (0xc0016cc6e0) Reply frame received for 3 I0524 21:45:26.272070 6 log.go:172] (0xc0016cc6e0) (0xc001a85040) Create stream I0524 21:45:26.272084 6 log.go:172] (0xc0016cc6e0) (0xc001a85040) Stream added, broadcasting: 5 I0524 21:45:26.273287 6 log.go:172] (0xc0016cc6e0) Reply frame received for 5 I0524 21:45:26.335740 6 log.go:172] (0xc0016cc6e0) Data frame received for 3 I0524 21:45:26.335768 6 log.go:172] (0xc0015397c0) (3) Data frame handling I0524 21:45:26.335780 6 log.go:172] (0xc0015397c0) (3) Data frame sent I0524 21:45:26.335786 6 log.go:172] (0xc0016cc6e0) Data frame received for 3 I0524 21:45:26.335792 6 log.go:172] (0xc0015397c0) (3) Data frame handling I0524 21:45:26.335818 6 log.go:172] (0xc0016cc6e0) Data frame received for 5 I0524 21:45:26.335827 6 log.go:172] (0xc001a85040) (5) Data frame handling I0524 21:45:26.337459 6 log.go:172] (0xc0016cc6e0) Data frame received for 1 I0524 21:45:26.337477 6 log.go:172] (0xc001e72e60) (1) Data frame handling I0524 21:45:26.337487 6 log.go:172] (0xc001e72e60) (1) Data frame sent I0524 21:45:26.337497 6 log.go:172] (0xc0016cc6e0) (0xc001e72e60) Stream removed, broadcasting: 1 I0524 21:45:26.337551 6 log.go:172] (0xc0016cc6e0) (0xc001e72e60) Stream removed, broadcasting: 1 I0524 21:45:26.337561 6 log.go:172] (0xc0016cc6e0) (0xc0015397c0) Stream removed, broadcasting: 3 I0524 
21:45:26.337595 6 log.go:172] (0xc0016cc6e0) Go away received I0524 21:45:26.337740 6 log.go:172] (0xc0016cc6e0) (0xc001a85040) Stream removed, broadcasting: 5 May 24 21:45:26.337: INFO: Exec stderr: "" May 24 21:45:26.337: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:26.337: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:26.366948 6 log.go:172] (0xc0016ccdc0) (0xc001e73180) Create stream I0524 21:45:26.366996 6 log.go:172] (0xc0016ccdc0) (0xc001e73180) Stream added, broadcasting: 1 I0524 21:45:26.369035 6 log.go:172] (0xc0016ccdc0) Reply frame received for 1 I0524 21:45:26.369067 6 log.go:172] (0xc0016ccdc0) (0xc001a850e0) Create stream I0524 21:45:26.369077 6 log.go:172] (0xc0016ccdc0) (0xc001a850e0) Stream added, broadcasting: 3 I0524 21:45:26.369963 6 log.go:172] (0xc0016ccdc0) Reply frame received for 3 I0524 21:45:26.369989 6 log.go:172] (0xc0016ccdc0) (0xc0023a1c20) Create stream I0524 21:45:26.369998 6 log.go:172] (0xc0016ccdc0) (0xc0023a1c20) Stream added, broadcasting: 5 I0524 21:45:26.370827 6 log.go:172] (0xc0016ccdc0) Reply frame received for 5 I0524 21:45:26.423144 6 log.go:172] (0xc0016ccdc0) Data frame received for 5 I0524 21:45:26.423173 6 log.go:172] (0xc0023a1c20) (5) Data frame handling I0524 21:45:26.423190 6 log.go:172] (0xc0016ccdc0) Data frame received for 3 I0524 21:45:26.423199 6 log.go:172] (0xc001a850e0) (3) Data frame handling I0524 21:45:26.423207 6 log.go:172] (0xc001a850e0) (3) Data frame sent I0524 21:45:26.423214 6 log.go:172] (0xc0016ccdc0) Data frame received for 3 I0524 21:45:26.423221 6 log.go:172] (0xc001a850e0) (3) Data frame handling I0524 21:45:26.424573 6 log.go:172] (0xc0016ccdc0) Data frame received for 1 I0524 21:45:26.424592 6 log.go:172] (0xc001e73180) (1) Data frame handling I0524 21:45:26.424616 6 log.go:172] (0xc001e73180) 
(1) Data frame sent I0524 21:45:26.424641 6 log.go:172] (0xc0016ccdc0) (0xc001e73180) Stream removed, broadcasting: 1 I0524 21:45:26.424725 6 log.go:172] (0xc0016ccdc0) (0xc001e73180) Stream removed, broadcasting: 1 I0524 21:45:26.424742 6 log.go:172] (0xc0016ccdc0) (0xc001a850e0) Stream removed, broadcasting: 3 I0524 21:45:26.424757 6 log.go:172] (0xc0016ccdc0) (0xc0023a1c20) Stream removed, broadcasting: 5 May 24 21:45:26.424: INFO: Exec stderr: "" May 24 21:45:26.424: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:26.424: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:26.424818 6 log.go:172] (0xc0016ccdc0) Go away received I0524 21:45:26.458007 6 log.go:172] (0xc001d629a0) (0xc0023a1f40) Create stream I0524 21:45:26.458032 6 log.go:172] (0xc001d629a0) (0xc0023a1f40) Stream added, broadcasting: 1 I0524 21:45:26.460609 6 log.go:172] (0xc001d629a0) Reply frame received for 1 I0524 21:45:26.460651 6 log.go:172] (0xc001d629a0) (0xc001a85360) Create stream I0524 21:45:26.460667 6 log.go:172] (0xc001d629a0) (0xc001a85360) Stream added, broadcasting: 3 I0524 21:45:26.461996 6 log.go:172] (0xc001d629a0) Reply frame received for 3 I0524 21:45:26.462016 6 log.go:172] (0xc001d629a0) (0xc001539a40) Create stream I0524 21:45:26.462023 6 log.go:172] (0xc001d629a0) (0xc001539a40) Stream added, broadcasting: 5 I0524 21:45:26.462961 6 log.go:172] (0xc001d629a0) Reply frame received for 5 I0524 21:45:26.532770 6 log.go:172] (0xc001d629a0) Data frame received for 5 I0524 21:45:26.532825 6 log.go:172] (0xc001539a40) (5) Data frame handling I0524 21:45:26.532867 6 log.go:172] (0xc001d629a0) Data frame received for 3 I0524 21:45:26.532890 6 log.go:172] (0xc001a85360) (3) Data frame handling I0524 21:45:26.532919 6 log.go:172] (0xc001a85360) (3) Data frame sent I0524 21:45:26.532941 6 log.go:172] 
(0xc001d629a0) Data frame received for 3 I0524 21:45:26.532959 6 log.go:172] (0xc001a85360) (3) Data frame handling I0524 21:45:26.535142 6 log.go:172] (0xc001d629a0) Data frame received for 1 I0524 21:45:26.535167 6 log.go:172] (0xc0023a1f40) (1) Data frame handling I0524 21:45:26.535198 6 log.go:172] (0xc0023a1f40) (1) Data frame sent I0524 21:45:26.535216 6 log.go:172] (0xc001d629a0) (0xc0023a1f40) Stream removed, broadcasting: 1 I0524 21:45:26.535300 6 log.go:172] (0xc001d629a0) (0xc0023a1f40) Stream removed, broadcasting: 1 I0524 21:45:26.535314 6 log.go:172] (0xc001d629a0) (0xc001a85360) Stream removed, broadcasting: 3 I0524 21:45:26.535384 6 log.go:172] (0xc001d629a0) Go away received I0524 21:45:26.535519 6 log.go:172] (0xc001d629a0) (0xc001539a40) Stream removed, broadcasting: 5 May 24 21:45:26.535: INFO: Exec stderr: "" May 24 21:45:26.535: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4518 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:45:26.535: INFO: >>> kubeConfig: /root/.kube/config I0524 21:45:26.562570 6 log.go:172] (0xc004c61290) (0xc001539ea0) Create stream I0524 21:45:26.562609 6 log.go:172] (0xc004c61290) (0xc001539ea0) Stream added, broadcasting: 1 I0524 21:45:26.564947 6 log.go:172] (0xc004c61290) Reply frame received for 1 I0524 21:45:26.564994 6 log.go:172] (0xc004c61290) (0xc001444000) Create stream I0524 21:45:26.565014 6 log.go:172] (0xc004c61290) (0xc001444000) Stream added, broadcasting: 3 I0524 21:45:26.566388 6 log.go:172] (0xc004c61290) Reply frame received for 3 I0524 21:45:26.566441 6 log.go:172] (0xc004c61290) (0xc0028bc640) Create stream I0524 21:45:26.566460 6 log.go:172] (0xc004c61290) (0xc0028bc640) Stream added, broadcasting: 5 I0524 21:45:26.567296 6 log.go:172] (0xc004c61290) Reply frame received for 5 I0524 21:45:26.642606 6 log.go:172] (0xc004c61290) Data frame received for 5 I0524 
21:45:26.642637 6 log.go:172] (0xc0028bc640) (5) Data frame handling I0524 21:45:26.642686 6 log.go:172] (0xc004c61290) Data frame received for 3 I0524 21:45:26.642726 6 log.go:172] (0xc001444000) (3) Data frame handling I0524 21:45:26.642752 6 log.go:172] (0xc001444000) (3) Data frame sent I0524 21:45:26.642777 6 log.go:172] (0xc004c61290) Data frame received for 3 I0524 21:45:26.642794 6 log.go:172] (0xc001444000) (3) Data frame handling I0524 21:45:26.643830 6 log.go:172] (0xc004c61290) Data frame received for 1 I0524 21:45:26.643848 6 log.go:172] (0xc001539ea0) (1) Data frame handling I0524 21:45:26.643859 6 log.go:172] (0xc001539ea0) (1) Data frame sent I0524 21:45:26.644011 6 log.go:172] (0xc004c61290) (0xc001539ea0) Stream removed, broadcasting: 1 I0524 21:45:26.644057 6 log.go:172] (0xc004c61290) Go away received I0524 21:45:26.644215 6 log.go:172] (0xc004c61290) (0xc001539ea0) Stream removed, broadcasting: 1 I0524 21:45:26.644238 6 log.go:172] (0xc004c61290) (0xc001444000) Stream removed, broadcasting: 3 I0524 21:45:26.644249 6 log.go:172] (0xc004c61290) (0xc0028bc640) Stream removed, broadcasting: 5 May 24 21:45:26.644: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:45:26.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4518" for this suite. 
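The assertions above hinge on one kubelet rule: for a pod with hostNetwork=false, the kubelet writes and owns each container's /etc/hosts, except for a container that mounts its own file at that path; a hostNetwork=true pod sees the node's file untouched. A minimal sketch of a pod exercising both cases (the pod name, images, and hostPath source are illustrative assumptions, not the manifests generated by the e2e framework):

```yaml
# Illustrative sketch only; the e2e framework builds its own test pods.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo          # hypothetical name
spec:
  hostNetwork: false            # kubelet manages /etc/hosts for these containers...
  containers:
  - name: managed
    image: busybox
    command: ["sleep", "3600"]
  - name: unmanaged
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: own-hosts
      mountPath: /etc/hosts     # ...except this one, which mounts its own file there
  volumes:
  - name: own-hosts
    hostPath:
      path: /etc/hosts          # assumed source; any explicit mount at /etc/hosts opts out
      type: File
```

Running `kubectl exec etc-hosts-demo -c managed -- cat /etc/hosts` should then show the kubelet-generated file (it carries a "Kubernetes-managed hosts file" comment), while the `unmanaged` container shows the mounted file unchanged.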
• [SLOW TEST:11.232 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1688,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:45:26.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 24 21:45:30.767: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:45:31.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-161" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1688,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:45:31.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
May 24 21:45:31.602: INFO: created pod pod-service-account-defaultsa
May 24 21:45:31.602: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 24 21:45:31.609: INFO: created pod pod-service-account-mountsa
May 24 21:45:31.609: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 24 21:45:31.615: INFO: created pod pod-service-account-nomountsa
May 24 21:45:31.615: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 24 21:45:31.634: INFO: created pod pod-service-account-defaultsa-mountspec
May 24 21:45:31.634: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 24 21:45:31.652: INFO: created pod pod-service-account-mountsa-mountspec
May 24 21:45:31.652: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 24 21:45:31.696: INFO: created pod pod-service-account-nomountsa-mountspec
May 24 21:45:31.696: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 24 21:45:31.738: INFO: created pod pod-service-account-defaultsa-nomountspec
May 24 21:45:31.738: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 24 21:45:31.769: INFO: created pod pod-service-account-mountsa-nomountspec
May 24 21:45:31.769: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 24 21:45:31.820: INFO: created pod pod-service-account-nomountsa-nomountspec
May 24 21:45:31.820: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:45:31.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-24" for this suite.
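The pod names above encode the matrix being tested: automountServiceAccountToken can be set on the ServiceAccount (`*sa`), on the pod spec (`*spec`), or both, and the pod-level field takes precedence over the ServiceAccount's. A minimal sketch of opting out at both levels (object names are illustrative, not the framework's generated ones):

```yaml
# Illustrative sketch only; names are hypothetical.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa
automountServiceAccountToken: false   # default for pods using this ServiceAccount
---
apiVersion: v1
kind: Pod
metadata:
  name: opt-out-demo
spec:
  serviceAccountName: no-token-sa
  automountServiceAccountToken: false # pod-level setting wins over the ServiceAccount's
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```

With either field set to false (and the pod not overriding it to true), no token volume is mounted, which is exactly the "token volume mount: false" the log records for the nomountsa/nomountspec pods.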
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":105,"skipped":1705,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:45:31.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:45:31.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30" in namespace "projected-4498" to be "success or failure" May 24 21:45:32.011: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30": Phase="Pending", Reason="", readiness=false. Elapsed: 34.767393ms May 24 21:45:34.014: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038089128s May 24 21:45:36.276: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299499222s May 24 21:45:38.404: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.427340775s May 24 21:45:40.603: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626346378s May 24 21:45:42.796: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30": Phase="Pending", Reason="", readiness=false. Elapsed: 10.81979363s May 24 21:45:44.844: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30": Phase="Running", Reason="", readiness=true. Elapsed: 12.86782853s May 24 21:45:46.848: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.872087013s STEP: Saw pod success May 24 21:45:46.848: INFO: Pod "downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30" satisfied condition "success or failure" May 24 21:45:46.852: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30 container client-container: STEP: delete the pod May 24 21:45:46.874: INFO: Waiting for pod downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30 to disappear May 24 21:45:46.879: INFO: Pod downwardapi-volume-33ddf577-55c5-4b37-a66e-f8419e7ebf30 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:45:46.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4498" for this suite. 
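The projected downwardAPI test above creates a pod whose container reads its own CPU request from a downward API volume. A minimal sketch under assumed names (pod name, mount path, and request value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m               # the value the volume file will contain (in divisor units)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m           # expose the request in millicores
```

The test then reads the container's logs and checks them against the declared request, which is why the log shows it fetching logs from `client-container` after the pod succeeds.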
• [SLOW TEST:15.007 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1719,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:45:46.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 24 21:45:47.043: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4208" to be "success or failure" May 24 21:45:47.119: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 76.633466ms May 24 21:45:49.123: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080786838s May 24 21:45:51.150: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.107040648s May 24 21:45:53.154: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111554938s STEP: Saw pod success May 24 21:45:53.154: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 24 21:45:53.158: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 24 21:45:53.175: INFO: Waiting for pod pod-host-path-test to disappear May 24 21:45:53.263: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:45:53.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4208" for this suite. • [SLOW TEST:6.382 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:45:53.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-f3d0ae30-12c1-4f24-97e7-11138343a43a STEP: Creating a pod to test consume configMaps May 24 21:45:53.338: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4698f552-354d-46c1-ae31-bd441e038f4b" in namespace "projected-4206" to be "success or failure" May 24 21:45:53.353: INFO: Pod "pod-projected-configmaps-4698f552-354d-46c1-ae31-bd441e038f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.807246ms May 24 21:45:55.357: INFO: Pod "pod-projected-configmaps-4698f552-354d-46c1-ae31-bd441e038f4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018326192s May 24 21:45:57.360: INFO: Pod "pod-projected-configmaps-4698f552-354d-46c1-ae31-bd441e038f4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022223566s STEP: Saw pod success May 24 21:45:57.361: INFO: Pod "pod-projected-configmaps-4698f552-354d-46c1-ae31-bd441e038f4b" satisfied condition "success or failure" May 24 21:45:57.364: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-4698f552-354d-46c1-ae31-bd441e038f4b container projected-configmap-volume-test: STEP: delete the pod May 24 21:45:57.390: INFO: Waiting for pod pod-projected-configmaps-4698f552-354d-46c1-ae31-bd441e038f4b to disappear May 24 21:45:57.401: INFO: Pod pod-projected-configmaps-4698f552-354d-46c1-ae31-bd441e038f4b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:45:57.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4206" for this suite. 
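The projected configMap test above ("mappings and Item mode set") consumes a ConfigMap key through a projected volume, remapping the key to a different file path and setting a per-item file mode. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config             # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/projected/mapped-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: mapped-key    # the "mapping": key renamed on disk
            mode: 0400          # the per-item file mode the test asserts
```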
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1774,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:45:57.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6516 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 24 21:45:57.545: INFO: Found 0 stateful pods, waiting for 3 May 24 21:46:07.548: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 21:46:07.548: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 21:46:07.548: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 24 21:46:17.549: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true 
May 24 21:46:17.549: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 21:46:17.549: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 24 21:46:17.574: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 24 21:46:27.614: INFO: Updating stateful set ss2 May 24 21:46:27.636: INFO: Waiting for Pod statefulset-6516/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 24 21:46:37.983: INFO: Found 2 stateful pods, waiting for 3 May 24 21:46:47.990: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 21:46:47.990: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 21:46:47.990: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 24 21:46:48.011: INFO: Updating stateful set ss2 May 24 21:46:48.016: INFO: Waiting for Pod statefulset-6516/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 21:46:58.025: INFO: Waiting for Pod statefulset-6516/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 21:47:08.039: INFO: Updating stateful set ss2 May 24 21:47:08.055: INFO: Waiting for StatefulSet statefulset-6516/ss2 to complete update May 24 21:47:08.055: INFO: Waiting for Pod statefulset-6516/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 21:47:18.063: INFO: Waiting for StatefulSet statefulset-6516/ss2 to complete update May 24 21:47:18.063: INFO: Waiting for Pod statefulset-6516/ss2-0 to have 
revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 24 21:47:28.063: INFO: Deleting all statefulset in ns statefulset-6516 May 24 21:47:28.066: INFO: Scaling statefulset ss2 to 0 May 24 21:47:58.103: INFO: Waiting for statefulset status.replicas updated to 0 May 24 21:47:58.106: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:47:58.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6516" for this suite. • [SLOW TEST:120.738 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":109,"skipped":1776,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:47:58.146: INFO: >>> kubeConfig: /root/.kube/config 
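The canary behavior exercised above is driven by the RollingUpdate partition: only pods with an ordinal at or above the partition receive the new revision, and lowering the partition phases the rollout across the remaining pods. A sketch of the StatefulSet shape involved, using the image names from the log (selector labels and service name are assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2                  # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2              # canary: only ss2-2 is updated; lower to 1, then 0, to phase the rollout
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine   # updated from 2.4.38-alpine per the log
```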
STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9637 STEP: creating replication controller nodeport-test in namespace services-9637 I0524 21:47:58.272018 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9637, replica count: 2 I0524 21:48:01.322476 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 21:48:04.322741 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 21:48:04.322: INFO: Creating new exec pod May 24 21:48:09.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9637 execpod24qsz -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 24 21:48:09.589: INFO: stderr: "I0524 21:48:09.485399 1106 log.go:172] (0xc0000f4420) (0xc0007a28c0) Create stream\nI0524 21:48:09.485456 1106 log.go:172] (0xc0000f4420) (0xc0007a28c0) Stream added, broadcasting: 1\nI0524 21:48:09.488271 1106 log.go:172] (0xc0000f4420) Reply frame received for 1\nI0524 21:48:09.488310 1106 log.go:172] (0xc0000f4420) (0xc0009d8000) Create stream\nI0524 21:48:09.488320 1106 log.go:172] (0xc0000f4420) (0xc0009d8000) Stream added, broadcasting: 3\nI0524 21:48:09.489565 1106 log.go:172] (0xc0000f4420) Reply frame received for 3\nI0524 21:48:09.489712 1106 log.go:172] (0xc0000f4420) (0xc0005edb80) Create stream\nI0524 21:48:09.489740 1106 
log.go:172] (0xc0000f4420) (0xc0005edb80) Stream added, broadcasting: 5\nI0524 21:48:09.490881 1106 log.go:172] (0xc0000f4420) Reply frame received for 5\nI0524 21:48:09.576968 1106 log.go:172] (0xc0000f4420) Data frame received for 5\nI0524 21:48:09.576992 1106 log.go:172] (0xc0005edb80) (5) Data frame handling\nI0524 21:48:09.577004 1106 log.go:172] (0xc0005edb80) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0524 21:48:09.581648 1106 log.go:172] (0xc0000f4420) Data frame received for 5\nI0524 21:48:09.581674 1106 log.go:172] (0xc0005edb80) (5) Data frame handling\nI0524 21:48:09.581690 1106 log.go:172] (0xc0005edb80) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0524 21:48:09.581827 1106 log.go:172] (0xc0000f4420) Data frame received for 5\nI0524 21:48:09.581847 1106 log.go:172] (0xc0005edb80) (5) Data frame handling\nI0524 21:48:09.582187 1106 log.go:172] (0xc0000f4420) Data frame received for 3\nI0524 21:48:09.582206 1106 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0524 21:48:09.583549 1106 log.go:172] (0xc0000f4420) Data frame received for 1\nI0524 21:48:09.583575 1106 log.go:172] (0xc0007a28c0) (1) Data frame handling\nI0524 21:48:09.583595 1106 log.go:172] (0xc0007a28c0) (1) Data frame sent\nI0524 21:48:09.583626 1106 log.go:172] (0xc0000f4420) (0xc0007a28c0) Stream removed, broadcasting: 1\nI0524 21:48:09.583711 1106 log.go:172] (0xc0000f4420) Go away received\nI0524 21:48:09.584056 1106 log.go:172] (0xc0000f4420) (0xc0007a28c0) Stream removed, broadcasting: 1\nI0524 21:48:09.584094 1106 log.go:172] (0xc0000f4420) (0xc0009d8000) Stream removed, broadcasting: 3\nI0524 21:48:09.584114 1106 log.go:172] (0xc0000f4420) (0xc0005edb80) Stream removed, broadcasting: 5\n" May 24 21:48:09.589: INFO: stdout: "" May 24 21:48:09.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9637 execpod24qsz -- /bin/sh -x -c nc -zv -t -w 2 10.110.38.81 80' May 24 21:48:09.810: 
INFO: stderr: "I0524 21:48:09.734318 1128 log.go:172] (0xc0007a89a0) (0xc000790000) Create stream\nI0524 21:48:09.734381 1128 log.go:172] (0xc0007a89a0) (0xc000790000) Stream added, broadcasting: 1\nI0524 21:48:09.736889 1128 log.go:172] (0xc0007a89a0) Reply frame received for 1\nI0524 21:48:09.736935 1128 log.go:172] (0xc0007a89a0) (0xc00095c000) Create stream\nI0524 21:48:09.736948 1128 log.go:172] (0xc0007a89a0) (0xc00095c000) Stream added, broadcasting: 3\nI0524 21:48:09.737972 1128 log.go:172] (0xc0007a89a0) Reply frame received for 3\nI0524 21:48:09.738004 1128 log.go:172] (0xc0007a89a0) (0xc00095c0a0) Create stream\nI0524 21:48:09.738015 1128 log.go:172] (0xc0007a89a0) (0xc00095c0a0) Stream added, broadcasting: 5\nI0524 21:48:09.738905 1128 log.go:172] (0xc0007a89a0) Reply frame received for 5\nI0524 21:48:09.802079 1128 log.go:172] (0xc0007a89a0) Data frame received for 5\nI0524 21:48:09.802110 1128 log.go:172] (0xc00095c0a0) (5) Data frame handling\nI0524 21:48:09.802127 1128 log.go:172] (0xc00095c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.110.38.81 80\nI0524 21:48:09.802422 1128 log.go:172] (0xc0007a89a0) Data frame received for 5\nI0524 21:48:09.802441 1128 log.go:172] (0xc00095c0a0) (5) Data frame handling\nI0524 21:48:09.802456 1128 log.go:172] (0xc00095c0a0) (5) Data frame sent\nConnection to 10.110.38.81 80 port [tcp/http] succeeded!\nI0524 21:48:09.802871 1128 log.go:172] (0xc0007a89a0) Data frame received for 5\nI0524 21:48:09.802895 1128 log.go:172] (0xc00095c0a0) (5) Data frame handling\nI0524 21:48:09.803069 1128 log.go:172] (0xc0007a89a0) Data frame received for 3\nI0524 21:48:09.803098 1128 log.go:172] (0xc00095c000) (3) Data frame handling\nI0524 21:48:09.804629 1128 log.go:172] (0xc0007a89a0) Data frame received for 1\nI0524 21:48:09.804661 1128 log.go:172] (0xc000790000) (1) Data frame handling\nI0524 21:48:09.804683 1128 log.go:172] (0xc000790000) (1) Data frame sent\nI0524 21:48:09.804694 1128 log.go:172] (0xc0007a89a0) (0xc000790000) 
Stream removed, broadcasting: 1\nI0524 21:48:09.804719 1128 log.go:172] (0xc0007a89a0) Go away received\nI0524 21:48:09.805361 1128 log.go:172] (0xc0007a89a0) (0xc000790000) Stream removed, broadcasting: 1\nI0524 21:48:09.805380 1128 log.go:172] (0xc0007a89a0) (0xc00095c000) Stream removed, broadcasting: 3\nI0524 21:48:09.805390 1128 log.go:172] (0xc0007a89a0) (0xc00095c0a0) Stream removed, broadcasting: 5\n" May 24 21:48:09.810: INFO: stdout: "" May 24 21:48:09.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9637 execpod24qsz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30214' May 24 21:48:10.023: INFO: stderr: "I0524 21:48:09.945543 1149 log.go:172] (0xc0005c7080) (0xc00070fa40) Create stream\nI0524 21:48:09.945596 1149 log.go:172] (0xc0005c7080) (0xc00070fa40) Stream added, broadcasting: 1\nI0524 21:48:09.948037 1149 log.go:172] (0xc0005c7080) Reply frame received for 1\nI0524 21:48:09.948074 1149 log.go:172] (0xc0005c7080) (0xc000aa8000) Create stream\nI0524 21:48:09.948087 1149 log.go:172] (0xc0005c7080) (0xc000aa8000) Stream added, broadcasting: 3\nI0524 21:48:09.948956 1149 log.go:172] (0xc0005c7080) Reply frame received for 3\nI0524 21:48:09.948994 1149 log.go:172] (0xc0005c7080) (0xc000aa80a0) Create stream\nI0524 21:48:09.949008 1149 log.go:172] (0xc0005c7080) (0xc000aa80a0) Stream added, broadcasting: 5\nI0524 21:48:09.950129 1149 log.go:172] (0xc0005c7080) Reply frame received for 5\nI0524 21:48:10.015085 1149 log.go:172] (0xc0005c7080) Data frame received for 5\nI0524 21:48:10.015129 1149 log.go:172] (0xc000aa80a0) (5) Data frame handling\nI0524 21:48:10.015201 1149 log.go:172] (0xc000aa80a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30214\nI0524 21:48:10.015235 1149 log.go:172] (0xc0005c7080) Data frame received for 5\nI0524 21:48:10.015256 1149 log.go:172] (0xc000aa80a0) (5) Data frame handling\nI0524 21:48:10.015282 1149 log.go:172] (0xc000aa80a0) (5) Data frame sent\nConnection to 
172.17.0.10 30214 port [tcp/30214] succeeded!\nI0524 21:48:10.015711 1149 log.go:172] (0xc0005c7080) Data frame received for 3\nI0524 21:48:10.015731 1149 log.go:172] (0xc000aa8000) (3) Data frame handling\nI0524 21:48:10.015781 1149 log.go:172] (0xc0005c7080) Data frame received for 5\nI0524 21:48:10.015831 1149 log.go:172] (0xc000aa80a0) (5) Data frame handling\nI0524 21:48:10.017641 1149 log.go:172] (0xc0005c7080) Data frame received for 1\nI0524 21:48:10.017654 1149 log.go:172] (0xc00070fa40) (1) Data frame handling\nI0524 21:48:10.017666 1149 log.go:172] (0xc00070fa40) (1) Data frame sent\nI0524 21:48:10.017676 1149 log.go:172] (0xc0005c7080) (0xc00070fa40) Stream removed, broadcasting: 1\nI0524 21:48:10.017913 1149 log.go:172] (0xc0005c7080) Go away received\nI0524 21:48:10.017946 1149 log.go:172] (0xc0005c7080) (0xc00070fa40) Stream removed, broadcasting: 1\nI0524 21:48:10.017971 1149 log.go:172] (0xc0005c7080) (0xc000aa8000) Stream removed, broadcasting: 3\nI0524 21:48:10.017982 1149 log.go:172] (0xc0005c7080) (0xc000aa80a0) Stream removed, broadcasting: 5\n" May 24 21:48:10.023: INFO: stdout: "" May 24 21:48:10.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9637 execpod24qsz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30214' May 24 21:48:10.221: INFO: stderr: "I0524 21:48:10.158900 1170 log.go:172] (0xc0005a0630) (0xc0008be000) Create stream\nI0524 21:48:10.158976 1170 log.go:172] (0xc0005a0630) (0xc0008be000) Stream added, broadcasting: 1\nI0524 21:48:10.162618 1170 log.go:172] (0xc0005a0630) Reply frame received for 1\nI0524 21:48:10.162650 1170 log.go:172] (0xc0005a0630) (0xc0008be0a0) Create stream\nI0524 21:48:10.162659 1170 log.go:172] (0xc0005a0630) (0xc0008be0a0) Stream added, broadcasting: 3\nI0524 21:48:10.163595 1170 log.go:172] (0xc0005a0630) Reply frame received for 3\nI0524 21:48:10.163645 1170 log.go:172] (0xc0005a0630) (0xc000611a40) Create stream\nI0524 21:48:10.163659 1170 log.go:172] 
(0xc0005a0630) (0xc000611a40) Stream added, broadcasting: 5\nI0524 21:48:10.164468 1170 log.go:172] (0xc0005a0630) Reply frame received for 5\nI0524 21:48:10.213830 1170 log.go:172] (0xc0005a0630) Data frame received for 5\nI0524 21:48:10.213856 1170 log.go:172] (0xc000611a40) (5) Data frame handling\nI0524 21:48:10.213878 1170 log.go:172] (0xc000611a40) (5) Data frame sent\nI0524 21:48:10.213888 1170 log.go:172] (0xc0005a0630) Data frame received for 5\nI0524 21:48:10.213896 1170 log.go:172] (0xc000611a40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30214\nConnection to 172.17.0.8 30214 port [tcp/30214] succeeded!\nI0524 21:48:10.214399 1170 log.go:172] (0xc0005a0630) Data frame received for 3\nI0524 21:48:10.214428 1170 log.go:172] (0xc0008be0a0) (3) Data frame handling\nI0524 21:48:10.215608 1170 log.go:172] (0xc0005a0630) Data frame received for 1\nI0524 21:48:10.215637 1170 log.go:172] (0xc0008be000) (1) Data frame handling\nI0524 21:48:10.215653 1170 log.go:172] (0xc0008be000) (1) Data frame sent\nI0524 21:48:10.215681 1170 log.go:172] (0xc0005a0630) (0xc0008be000) Stream removed, broadcasting: 1\nI0524 21:48:10.215707 1170 log.go:172] (0xc0005a0630) Go away received\nI0524 21:48:10.216171 1170 log.go:172] (0xc0005a0630) (0xc0008be000) Stream removed, broadcasting: 1\nI0524 21:48:10.216210 1170 log.go:172] (0xc0005a0630) (0xc0008be0a0) Stream removed, broadcasting: 3\nI0524 21:48:10.216235 1170 log.go:172] (0xc0005a0630) (0xc000611a40) Stream removed, broadcasting: 5\n" May 24 21:48:10.221: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:48:10.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9637" for this suite. 
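The NodePort test above creates a `type: NodePort` service backed by a two-replica replication controller, then verifies reachability via the service name, the ClusterIP (10.110.38.81:80), and each node IP on the allocated node port (30214 in this run). A minimal sketch of the service (selector label is an assumption; the nodePort is allocated from the cluster's range, 30000-32767 by default, when omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  selector:
    name: nodeport-test         # assumed pod label
  ports:
  - port: 80                    # ClusterIP port
    targetPort: 80              # pod port
    # nodePort: 30214           # auto-allocated in the run above
```

Each reachability check in the log is a plain TCP probe from an exec pod, e.g. `nc -zv -t -w 2 172.17.0.10 30214`.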
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.083 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":110,"skipped":1782,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:48:10.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-353 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 21:48:10.281: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 24 21:48:34.430: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.204:8080/dial?request=hostname&protocol=udp&host=10.244.1.154&port=8081&tries=1'] Namespace:pod-network-test-353 PodName:host-test-container-pod 
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:48:34.430: INFO: >>> kubeConfig: /root/.kube/config I0524 21:48:34.455219 6 log.go:172] (0xc0016cc4d0) (0xc001538b40) Create stream I0524 21:48:34.455245 6 log.go:172] (0xc0016cc4d0) (0xc001538b40) Stream added, broadcasting: 1 I0524 21:48:34.457059 6 log.go:172] (0xc0016cc4d0) Reply frame received for 1 I0524 21:48:34.457085 6 log.go:172] (0xc0016cc4d0) (0xc001538e60) Create stream I0524 21:48:34.457094 6 log.go:172] (0xc0016cc4d0) (0xc001538e60) Stream added, broadcasting: 3 I0524 21:48:34.458308 6 log.go:172] (0xc0016cc4d0) Reply frame received for 3 I0524 21:48:34.458347 6 log.go:172] (0xc0016cc4d0) (0xc002318000) Create stream I0524 21:48:34.458361 6 log.go:172] (0xc0016cc4d0) (0xc002318000) Stream added, broadcasting: 5 I0524 21:48:34.459461 6 log.go:172] (0xc0016cc4d0) Reply frame received for 5 I0524 21:48:34.534105 6 log.go:172] (0xc0016cc4d0) Data frame received for 3 I0524 21:48:34.534141 6 log.go:172] (0xc001538e60) (3) Data frame handling I0524 21:48:34.534161 6 log.go:172] (0xc001538e60) (3) Data frame sent I0524 21:48:34.534952 6 log.go:172] (0xc0016cc4d0) Data frame received for 3 I0524 21:48:34.534982 6 log.go:172] (0xc001538e60) (3) Data frame handling I0524 21:48:34.535147 6 log.go:172] (0xc0016cc4d0) Data frame received for 5 I0524 21:48:34.535179 6 log.go:172] (0xc002318000) (5) Data frame handling I0524 21:48:34.536638 6 log.go:172] (0xc0016cc4d0) Data frame received for 1 I0524 21:48:34.536660 6 log.go:172] (0xc001538b40) (1) Data frame handling I0524 21:48:34.536680 6 log.go:172] (0xc001538b40) (1) Data frame sent I0524 21:48:34.536696 6 log.go:172] (0xc0016cc4d0) (0xc001538b40) Stream removed, broadcasting: 1 I0524 21:48:34.536709 6 log.go:172] (0xc0016cc4d0) Go away received I0524 21:48:34.536810 6 log.go:172] (0xc0016cc4d0) (0xc001538b40) Stream removed, broadcasting: 1 I0524 21:48:34.536839 6 log.go:172] (0xc0016cc4d0) (0xc001538e60) 
Stream removed, broadcasting: 3 I0524 21:48:34.536855 6 log.go:172] (0xc0016cc4d0) (0xc002318000) Stream removed, broadcasting: 5 May 24 21:48:34.536: INFO: Waiting for responses: map[] May 24 21:48:34.540: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.204:8080/dial?request=hostname&protocol=udp&host=10.244.2.203&port=8081&tries=1'] Namespace:pod-network-test-353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:48:34.540: INFO: >>> kubeConfig: /root/.kube/config I0524 21:48:34.575073 6 log.go:172] (0xc0017062c0) (0xc002318460) Create stream I0524 21:48:34.575110 6 log.go:172] (0xc0017062c0) (0xc002318460) Stream added, broadcasting: 1 I0524 21:48:34.577058 6 log.go:172] (0xc0017062c0) Reply frame received for 1 I0524 21:48:34.577288 6 log.go:172] (0xc0017062c0) (0xc0028bd5e0) Create stream I0524 21:48:34.577307 6 log.go:172] (0xc0017062c0) (0xc0028bd5e0) Stream added, broadcasting: 3 I0524 21:48:34.578322 6 log.go:172] (0xc0017062c0) Reply frame received for 3 I0524 21:48:34.578349 6 log.go:172] (0xc0017062c0) (0xc0028bd720) Create stream I0524 21:48:34.578367 6 log.go:172] (0xc0017062c0) (0xc0028bd720) Stream added, broadcasting: 5 I0524 21:48:34.579302 6 log.go:172] (0xc0017062c0) Reply frame received for 5 I0524 21:48:34.632713 6 log.go:172] (0xc0017062c0) Data frame received for 3 I0524 21:48:34.632734 6 log.go:172] (0xc0028bd5e0) (3) Data frame handling I0524 21:48:34.632747 6 log.go:172] (0xc0028bd5e0) (3) Data frame sent I0524 21:48:34.633430 6 log.go:172] (0xc0017062c0) Data frame received for 5 I0524 21:48:34.633468 6 log.go:172] (0xc0028bd720) (5) Data frame handling I0524 21:48:34.633500 6 log.go:172] (0xc0017062c0) Data frame received for 3 I0524 21:48:34.633521 6 log.go:172] (0xc0028bd5e0) (3) Data frame handling I0524 21:48:34.634703 6 log.go:172] (0xc0017062c0) Data frame received for 1 I0524 21:48:34.634732 6 log.go:172] (0xc002318460) 
(1) Data frame handling I0524 21:48:34.634741 6 log.go:172] (0xc002318460) (1) Data frame sent I0524 21:48:34.634809 6 log.go:172] (0xc0017062c0) (0xc002318460) Stream removed, broadcasting: 1 I0524 21:48:34.634854 6 log.go:172] (0xc0017062c0) Go away received I0524 21:48:34.634937 6 log.go:172] (0xc0017062c0) (0xc002318460) Stream removed, broadcasting: 1 I0524 21:48:34.634955 6 log.go:172] (0xc0017062c0) (0xc0028bd5e0) Stream removed, broadcasting: 3 I0524 21:48:34.634963 6 log.go:172] (0xc0017062c0) (0xc0028bd720) Stream removed, broadcasting: 5 May 24 21:48:34.635: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:48:34.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-353" for this suite. • [SLOW TEST:24.412 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client May 24 21:48:34.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 24 21:48:34.756: INFO: Created pod &Pod{ObjectMeta:{dns-8691 dns-8691 /api/v1/namespaces/dns-8691/pods/dns-8691 c87ecd13-9148-4022-a03d-b8fc251f8748 18859563 0 2020-05-24 21:48:34 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r55rt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r55rt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r55rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,Terminat
ionGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
May 24 21:48:38.763: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8691 PodName:dns-8691 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:48:38.764: INFO: >>> kubeConfig: /root/.kube/config I0524 21:48:38.800437 6 log.go:172] (0xc001706dc0) (0xc002319360) Create stream I0524 21:48:38.800465 6 log.go:172] (0xc001706dc0) (0xc002319360) Stream added, broadcasting: 1 I0524 21:48:38.802653 6 log.go:172] (0xc001706dc0) Reply frame received for 1 I0524 21:48:38.802677 6 log.go:172] (0xc001706dc0) (0xc0023194a0) Create stream I0524 21:48:38.802685 6 log.go:172] (0xc001706dc0) (0xc0023194a0) Stream added, broadcasting: 3 I0524 21:48:38.803640 6 log.go:172] (0xc001706dc0) Reply frame received for 3 I0524 21:48:38.803692 6 log.go:172] (0xc001706dc0) (0xc0015390e0) Create stream I0524 21:48:38.803711 6 log.go:172] (0xc001706dc0) (0xc0015390e0) Stream added, broadcasting: 5 I0524 21:48:38.804788 6 log.go:172] (0xc001706dc0) Reply frame received for 5 I0524 21:48:38.940414 6 log.go:172] (0xc001706dc0) Data frame received for 3 I0524 21:48:38.940446 6 log.go:172] (0xc0023194a0) (3) Data frame handling I0524 21:48:38.940466 6 log.go:172] (0xc0023194a0) (3) Data frame sent I0524 21:48:38.941843 6 log.go:172] (0xc001706dc0) Data frame received for 3 I0524 21:48:38.941887 6 log.go:172] (0xc0023194a0) (3) Data frame handling I0524 21:48:38.942037 6 log.go:172] (0xc001706dc0) Data frame received for 5 I0524 21:48:38.942067 6 log.go:172] (0xc0015390e0) (5) Data frame handling I0524 21:48:38.944063 6 log.go:172] (0xc001706dc0) Data frame received for 1 I0524 21:48:38.944107 6 log.go:172] (0xc002319360) (1) Data frame handling I0524 21:48:38.944136 6 log.go:172] (0xc002319360) (1) Data frame sent I0524 21:48:38.944155 6 log.go:172] (0xc001706dc0) (0xc002319360) Stream removed, broadcasting: 1 I0524 21:48:38.944185 6 log.go:172] (0xc001706dc0) Go away received I0524 21:48:38.944349 6 log.go:172] (0xc001706dc0) 
(0xc002319360) Stream removed, broadcasting: 1 I0524 21:48:38.944389 6 log.go:172] (0xc001706dc0) (0xc0023194a0) Stream removed, broadcasting: 3 I0524 21:48:38.944416 6 log.go:172] (0xc001706dc0) (0xc0015390e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 24 21:48:38.944: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8691 PodName:dns-8691 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:48:38.944: INFO: >>> kubeConfig: /root/.kube/config I0524 21:48:38.981925 6 log.go:172] (0xc001707550) (0xc002319860) Create stream I0524 21:48:38.981963 6 log.go:172] (0xc001707550) (0xc002319860) Stream added, broadcasting: 1 I0524 21:48:38.983686 6 log.go:172] (0xc001707550) Reply frame received for 1 I0524 21:48:38.983726 6 log.go:172] (0xc001707550) (0xc001539180) Create stream I0524 21:48:38.983740 6 log.go:172] (0xc001707550) (0xc001539180) Stream added, broadcasting: 3 I0524 21:48:38.984676 6 log.go:172] (0xc001707550) Reply frame received for 3 I0524 21:48:38.984733 6 log.go:172] (0xc001707550) (0xc0015392c0) Create stream I0524 21:48:38.984751 6 log.go:172] (0xc001707550) (0xc0015392c0) Stream added, broadcasting: 5 I0524 21:48:38.986084 6 log.go:172] (0xc001707550) Reply frame received for 5 I0524 21:48:39.068367 6 log.go:172] (0xc001707550) Data frame received for 3 I0524 21:48:39.068395 6 log.go:172] (0xc001539180) (3) Data frame handling I0524 21:48:39.068413 6 log.go:172] (0xc001539180) (3) Data frame sent I0524 21:48:39.069774 6 log.go:172] (0xc001707550) Data frame received for 5 I0524 21:48:39.069810 6 log.go:172] (0xc0015392c0) (5) Data frame handling I0524 21:48:39.069969 6 log.go:172] (0xc001707550) Data frame received for 3 I0524 21:48:39.069995 6 log.go:172] (0xc001539180) (3) Data frame handling I0524 21:48:39.071167 6 log.go:172] (0xc001707550) Data frame received for 1 I0524 21:48:39.071192 6 log.go:172] (0xc002319860) (1) 
Data frame handling I0524 21:48:39.071212 6 log.go:172] (0xc002319860) (1) Data frame sent I0524 21:48:39.071238 6 log.go:172] (0xc001707550) (0xc002319860) Stream removed, broadcasting: 1 I0524 21:48:39.071269 6 log.go:172] (0xc001707550) Go away received I0524 21:48:39.071403 6 log.go:172] (0xc001707550) (0xc002319860) Stream removed, broadcasting: 1 I0524 21:48:39.071425 6 log.go:172] (0xc001707550) (0xc001539180) Stream removed, broadcasting: 3 I0524 21:48:39.071433 6 log.go:172] (0xc001707550) (0xc0015392c0) Stream removed, broadcasting: 5 May 24 21:48:39.071: INFO: Deleting pod dns-8691... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:48:39.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8691" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":112,"skipped":1829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:48:39.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] 
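The long pod spec dumped above boils down to a small manifest: `dnsPolicy: None` plus an explicit `dnsConfig`, so the pod's resolv.conf is built solely from the values in the spec rather than inherited from the node or cluster DNS. A minimal sketch of that manifest as a Python dict (nameserver and search values taken from the logged spec; the builder function itself is illustrative):

```python
def pod_with_custom_dns(name, namespace):
    """Sketch of the pod the DNS e2e test submits: with dnsPolicy None,
    the kubelet writes resolv.conf entirely from dnsConfig below."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "containers": [{
                "name": "agnhost",
                "image": "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                "args": ["pause"],
            }],
            "dnsPolicy": "None",
            "dnsConfig": {
                "nameservers": ["1.1.1.1"],
                "searches": ["resolv.conf.local"],
            },
        },
    }
```

The test then execs `/agnhost dns-suffix` and `/agnhost dns-server-list` in the pod to confirm both values actually landed in resolv.conf.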
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:48:39.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3357" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":113,"skipped":1852,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:48:39.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:48:41.387: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:48:43.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
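The QoS test above expects a pod whose containers set equal requests and limits for both memory and cpu to be classified as Guaranteed. A simplified sketch of that classification rule (this is an approximation for illustration, not the actual kubelet code, which also handles defaulting and init containers):

```python
def qos_class(containers):
    """Approximate Kubernetes QoS classification.

    Guaranteed: every container sets cpu and memory limits, and any
    explicit request equals its limit (requests default to limits).
    BestEffort: no requests or limits anywhere. Otherwise: Burstable.
    """
    any_set = False
    guaranteed = True
    for c in containers:
        res = c.get("resources", {})
        req, lim = res.get("requests", {}), res.get("limits", {})
        if req or lim:
            any_set = True
        for r in ("cpu", "memory"):
            # Not Guaranteed if a limit is missing or a request differs.
            if r not in lim or req.get(r, lim.get(r)) != lim[r]:
                guaranteed = False
    if not any_set:
        return "BestEffort"
    return "Guaranteed" if guaranteed else "Burstable"
```

A pod with `requests == limits == {cpu: 100m, memory: 100Mi}` on its only container comes out Guaranteed, which is the condition this conformance test verifies via `status.qosClass`.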
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953721, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953721, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953721, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953721, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:48:45.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953721, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953721, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953721, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953721, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:48:48.440: INFO: Waiting for amount of service:e2e-test-webhook 
endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:48:48.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:48:49.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1855" for this suite. STEP: Destroying namespace "webhook-1855-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.258 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":114,"skipped":1864,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:48:49.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:48:49.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe" in namespace "projected-4495" to be "success 
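The webhook in the test above denies creates, updates, and deletes of the custom resource by answering `allowed: false` in its AdmissionReview response. A hedged sketch of such a response body (the `deny_response` helper and the 403 status code are illustrative choices, not the test webhook's actual implementation; the envelope shape follows the admission.k8s.io/v1 API):

```python
def deny_response(uid, reason):
    """AdmissionReview response a validating webhook returns to reject
    an admission request. `uid` must echo request.uid from the review
    the API server sent; `reason` surfaces in the client's error."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": False,
            "status": {"code": 403, "message": reason},
        },
    }
```

When the webhook answers this way, the API server fails the operation, which is why the test's disallowed create, update, and delete attempts are all rejected until the offending data is removed.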
or failure" May 24 21:48:49.913: INFO: Pod "downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.965866ms May 24 21:48:51.938: INFO: Pod "downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029083619s May 24 21:48:53.942: INFO: Pod "downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe": Phase="Running", Reason="", readiness=true. Elapsed: 4.033134255s May 24 21:48:55.947: INFO: Pod "downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037842198s STEP: Saw pod success May 24 21:48:55.947: INFO: Pod "downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe" satisfied condition "success or failure" May 24 21:48:55.950: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe container client-container: STEP: delete the pod May 24 21:48:56.098: INFO: Waiting for pod downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe to disappear May 24 21:48:56.100: INFO: Pod downwardapi-volume-e874f165-ee8b-4863-a107-d6db82dbeffe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:48:56.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4495" for this suite. 
• [SLOW TEST:6.256 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:48:56.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:48:56.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-72" for this suite. 
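The projected downwardAPI test above mounts the container's memory request into the pod as a file. A sketch of the volume source involved (field names follow the core/v1 API; the `podinfo` volume name and `memory_request` path are illustrative):

```python
def projected_memory_request_volume(container_name):
    """Projected volume with a downwardAPI source: exposes the named
    container's requests.memory value as the file `memory_request`."""
    return {
        "name": "podinfo",
        "projected": {
            "sources": [{
                "downwardAPI": {
                    "items": [{
                        "path": "memory_request",
                        "resourceFieldRef": {
                            "containerName": container_name,
                            "resource": "requests.memory",
                        },
                    }],
                },
            }],
        },
    }
```

The test's client container then cats the mounted file and the framework checks the logged value against the request set in the pod spec.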
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":116,"skipped":1937,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:48:56.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2659/configmap-test-3256e8bd-7728-421e-b18b-d0364f2b082f STEP: Creating a pod to test consume configMaps May 24 21:48:56.247: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ca0b10e-9ece-4d69-a539-b913c62a591f" in namespace "configmap-2659" to be "success or failure" May 24 21:48:56.251: INFO: Pod "pod-configmaps-6ca0b10e-9ece-4d69-a539-b913c62a591f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216838ms May 24 21:48:58.256: INFO: Pod "pod-configmaps-6ca0b10e-9ece-4d69-a539-b913c62a591f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008687348s May 24 21:49:00.260: INFO: Pod "pod-configmaps-6ca0b10e-9ece-4d69-a539-b913c62a591f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01260839s STEP: Saw pod success May 24 21:49:00.260: INFO: Pod "pod-configmaps-6ca0b10e-9ece-4d69-a539-b913c62a591f" satisfied condition "success or failure" May 24 21:49:00.262: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6ca0b10e-9ece-4d69-a539-b913c62a591f container env-test: STEP: delete the pod May 24 21:49:00.289: INFO: Waiting for pod pod-configmaps-6ca0b10e-9ece-4d69-a539-b913c62a591f to disappear May 24 21:49:00.379: INFO: Pod pod-configmaps-6ca0b10e-9ece-4d69-a539-b913c62a591f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:49:00.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2659" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1940,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:49:00.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-9shq STEP: Creating a pod 
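The ConfigMap test above injects a ConfigMap key into the container's environment via `configMapKeyRef`. A sketch of the relevant container `env` entry (shape per the core/v1 EnvVar/EnvVarSource API; the helper and names are illustrative):

```python
def env_from_configmap(var_name, cm_name, key):
    """Container env entry whose value is sourced from one key of a
    ConfigMap in the same namespace as the pod."""
    return {
        "name": var_name,
        "valueFrom": {
            "configMapKeyRef": {"name": cm_name, "key": key},
        },
    }
```

The test's `env-test` container simply prints its environment, and the framework asserts the variable carries the ConfigMap's value.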
to test atomic-volume-subpath May 24 21:49:00.475: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9shq" in namespace "subpath-8809" to be "success or failure" May 24 21:49:00.518: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Pending", Reason="", readiness=false. Elapsed: 42.676312ms May 24 21:49:02.522: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047021342s May 24 21:49:04.535: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 4.059484669s May 24 21:49:06.539: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 6.063737322s May 24 21:49:08.543: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 8.068154784s May 24 21:49:10.548: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 10.072742805s May 24 21:49:12.552: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 12.077146464s May 24 21:49:14.557: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 14.081852446s May 24 21:49:16.561: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 16.085933707s May 24 21:49:18.565: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 18.090169856s May 24 21:49:20.570: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 20.094910977s May 24 21:49:22.574: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Running", Reason="", readiness=true. Elapsed: 22.09854721s May 24 21:49:24.577: INFO: Pod "pod-subpath-test-configmap-9shq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.102250131s STEP: Saw pod success May 24 21:49:24.577: INFO: Pod "pod-subpath-test-configmap-9shq" satisfied condition "success or failure" May 24 21:49:24.580: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-9shq container test-container-subpath-configmap-9shq: STEP: delete the pod May 24 21:49:24.730: INFO: Waiting for pod pod-subpath-test-configmap-9shq to disappear May 24 21:49:24.734: INFO: Pod pod-subpath-test-configmap-9shq no longer exists STEP: Deleting pod pod-subpath-test-configmap-9shq May 24 21:49:24.734: INFO: Deleting pod "pod-subpath-test-configmap-9shq" in namespace "subpath-8809" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:49:24.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8809" for this suite. • [SLOW TEST:24.355 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":118,"skipped":1956,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret 
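The subpath test above mounts a single key of a ConfigMap volume at a file path via `subPath`; the long Running phase is the container repeatedly reading that file to prove the mount stays valid across the volume's atomic updates. A sketch of the volume-plus-mount pair involved (the `cm-vol` and container names are illustrative; field names follow the core/v1 API):

```python
def configmap_subpath_fragment(cm_name, key):
    """Pod spec fragment: a ConfigMap volume, with one key of it
    mounted as a single file inside the container via subPath."""
    return {
        "volumes": [{"name": "cm-vol", "configMap": {"name": cm_name}}],
        "containers": [{
            "name": "test-container-subpath",
            "volumeMounts": [{
                "name": "cm-vol",
                "mountPath": "/test-volume/" + key,
                "subPath": key,   # mount just this key, not the whole volume
            }],
        }],
    }
```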
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:49:24.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-4eb112c9-cf99-434e-a66f-e35dd794b8b3 STEP: Creating a pod to test consume secrets May 24 21:49:24.843: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8fc4b435-6f51-4465-9a73-46c604d644f6" in namespace "projected-4145" to be "success or failure" May 24 21:49:24.868: INFO: Pod "pod-projected-secrets-8fc4b435-6f51-4465-9a73-46c604d644f6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.150904ms May 24 21:49:26.985: INFO: Pod "pod-projected-secrets-8fc4b435-6f51-4465-9a73-46c604d644f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14231278s May 24 21:49:28.990: INFO: Pod "pod-projected-secrets-8fc4b435-6f51-4465-9a73-46c604d644f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.147464988s STEP: Saw pod success May 24 21:49:28.990: INFO: Pod "pod-projected-secrets-8fc4b435-6f51-4465-9a73-46c604d644f6" satisfied condition "success or failure" May 24 21:49:28.994: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-8fc4b435-6f51-4465-9a73-46c604d644f6 container projected-secret-volume-test: STEP: delete the pod May 24 21:49:29.037: INFO: Waiting for pod pod-projected-secrets-8fc4b435-6f51-4465-9a73-46c604d644f6 to disappear May 24 21:49:29.042: INFO: Pod pod-projected-secrets-8fc4b435-6f51-4465-9a73-46c604d644f6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:49:29.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4145" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2020,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:49:29.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test 
downward API volume plugin May 24 21:49:29.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c14f67b9-11a3-49a7-a06d-30fc66c8d382" in namespace "downward-api-1687" to be "success or failure" May 24 21:49:29.132: INFO: Pod "downwardapi-volume-c14f67b9-11a3-49a7-a06d-30fc66c8d382": Phase="Pending", Reason="", readiness=false. Elapsed: 8.766451ms May 24 21:49:31.159: INFO: Pod "downwardapi-volume-c14f67b9-11a3-49a7-a06d-30fc66c8d382": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035361661s May 24 21:49:33.163: INFO: Pod "downwardapi-volume-c14f67b9-11a3-49a7-a06d-30fc66c8d382": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039416341s STEP: Saw pod success May 24 21:49:33.163: INFO: Pod "downwardapi-volume-c14f67b9-11a3-49a7-a06d-30fc66c8d382" satisfied condition "success or failure" May 24 21:49:33.166: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c14f67b9-11a3-49a7-a06d-30fc66c8d382 container client-container: STEP: delete the pod May 24 21:49:33.181: INFO: Waiting for pod downwardapi-volume-c14f67b9-11a3-49a7-a06d-30fc66c8d382 to disappear May 24 21:49:33.230: INFO: Pod downwardapi-volume-c14f67b9-11a3-49a7-a06d-30fc66c8d382 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:49:33.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1687" for this suite. 
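The podname test above uses a downward API volume item whose `fieldRef` points at `metadata.name`, so the pod's own name appears as a file in the mounted volume. A sketch (field names per the core/v1 DownwardAPIVolumeFile API; the `podinfo` volume name and `podname` path are illustrative):

```python
def downward_api_podname_volume():
    """Downward API volume exposing the pod's name as the file
    `podname` under the volume's mount path."""
    return {
        "name": "podinfo",
        "downwardAPI": {
            "items": [{
                "path": "podname",
                "fieldRef": {"fieldPath": "metadata.name"},
            }],
        },
    }
```

The `client-container` in the test cats that file and the framework compares the log output to the generated pod name.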
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2023,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:49:33.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 24 21:49:33.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 24 21:49:35.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7854 create -f -'
May 24 21:49:38.677: INFO: stderr: ""
May 24 21:49:38.677: INFO: stdout: "e2e-test-crd-publish-openapi-5190-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 24 21:49:38.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7854 delete e2e-test-crd-publish-openapi-5190-crds test-cr'
May 24 21:49:38.764: INFO: stderr: ""
May 24 21:49:38.764: INFO: stdout: "e2e-test-crd-publish-openapi-5190-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 24 21:49:38.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7854 apply -f -'
May 24 21:49:39.000: INFO: stderr: ""
May 24 21:49:39.000: INFO: stdout: "e2e-test-crd-publish-openapi-5190-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 24 21:49:39.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7854 delete e2e-test-crd-publish-openapi-5190-crds test-cr'
May 24 21:49:39.115: INFO: stderr: ""
May 24 21:49:39.115: INFO: stdout: "e2e-test-crd-publish-openapi-5190-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 24 21:49:39.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5190-crds'
May 24 21:49:39.343: INFO: stderr: ""
May 24 21:49:39.343: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5190-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:49:42.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7854" for this suite.
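The CRD exercised above has no validation schema, which is why `kubectl explain` prints an empty DESCRIPTION and why arbitrary unknown fields pass client-side validation. A sketch of such a CRD, reconstructed from the group/kind names in the log (the suite may register it programmatically and via a different API version; `x-kubernetes-preserve-unknown-fields` is the `apiextensions.k8s.io/v1` way to express "no validation"):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-5190-crds.crd-publish-openapi-test-empty.example.com
spec:
  group: crd-publish-openapi-test-empty.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-5190-crds
    singular: e2e-test-crd-publish-openapi-5190-crd
    kind: E2e-test-crd-publish-openapi-5190-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept any properties; nothing to validate
```

With this published, the OpenAPI document the apiserver serves for the CR contains only the kind/version stanza seen in the `kubectl explain` output above.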
• [SLOW TEST:8.992 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":121,"skipped":2037,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:49:42.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
May 24 21:49:42.305: INFO: Waiting up to 5m0s for pod "client-containers-c8ba37c1-e32b-4d20-b8e2-51f6e9d024e0" in namespace "containers-9792" to be "success or failure"
May 24 21:49:42.309: INFO: Pod "client-containers-c8ba37c1-e32b-4d20-b8e2-51f6e9d024e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097546ms
May 24 21:49:44.314: INFO: Pod "client-containers-c8ba37c1-e32b-4d20-b8e2-51f6e9d024e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008962912s
May 24 21:49:46.318: INFO: Pod "client-containers-c8ba37c1-e32b-4d20-b8e2-51f6e9d024e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012567893s
STEP: Saw pod success
May 24 21:49:46.318: INFO: Pod "client-containers-c8ba37c1-e32b-4d20-b8e2-51f6e9d024e0" satisfied condition "success or failure"
May 24 21:49:46.320: INFO: Trying to get logs from node jerma-worker pod client-containers-c8ba37c1-e32b-4d20-b8e2-51f6e9d024e0 container test-container: 
STEP: delete the pod
May 24 21:49:46.462: INFO: Waiting for pod client-containers-c8ba37c1-e32b-4d20-b8e2-51f6e9d024e0 to disappear
May 24 21:49:46.495: INFO: Pod client-containers-c8ba37c1-e32b-4d20-b8e2-51f6e9d024e0 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:49:46.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9792" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2057,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:49:46.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 24 21:49:47.400: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"08c69e51-cb38-43df-8c15-fd05d6e8a8e0", Controller:(*bool)(0xc0051f3a52), BlockOwnerDeletion:(*bool)(0xc0051f3a53)}}
May 24 21:49:47.450: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4044ee90-a6cc-4a41-ba55-41a3f73b7c50", Controller:(*bool)(0xc00523ea42), BlockOwnerDeletion:(*bool)(0xc00523ea43)}}
May 24 21:49:47.506: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"80175055-d4b3-4d68-8173-a6943d6366a4", Controller:(*bool)(0xc005208d0a), BlockOwnerDeletion:(*bool)(0xc005208d0b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:49:52.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1899" for this suite.
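The garbage-collector test above deliberately builds a dependency cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2 (the log prints the Go pointer addresses of `Controller`/`BlockOwnerDeletion`, not their values). A sketch of the metadata fragment for one link of the cycle, using pod1's UID from the log; the boolean values are illustrative:

```yaml
# Fragment of pod2's metadata: pod2 is owned by pod1
# (pod1 is in turn owned by pod3, and pod3 by pod2, closing the circle).
metadata:
  name: pod2
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod1
    uid: 4044ee90-a6cc-4a41-ba55-41a3f73b7c50   # pod1's UID, from the log above
    controller: true                            # illustrative; the log only shows pointers
    blockOwnerDeletion: true
```

The point of the test is that the garbage collector still deletes all three pods once the cycle's members are removed, rather than deadlocking on the circular ownership.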
• [SLOW TEST:6.041 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":123,"skipped":2074,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:49:52.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
May 24 21:49:57.173: INFO: Successfully updated pod "annotationupdatedcd8eb6b-7b69-47ba-937f-bbce6fdb8a4a"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:49:59.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8826" for this suite.
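The "update annotations on modification" test above relies on the kubelet refreshing downwardAPI volume files when pod metadata changes. A sketch of the volume definition involved (a fragment, not the test's full pod spec):

```yaml
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: annotations
      fieldRef:
        fieldPath: metadata.annotations   # kubelet rewrites this file after the
                                          # pod's annotations are updated
```

After the test patches the pod's annotations ("Successfully updated pod ..." above), it watches the container's view of the `annotations` file until the new values appear.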
• [SLOW TEST:6.667 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:49:59.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
May 24 21:49:59.291: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:49:59.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1413" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":125,"skipped":2148,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:49:59.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-795d15b4-c34a-44e2-829b-118b0cbfae68
STEP: Creating configMap with name cm-test-opt-upd-eb647f80-d022-4a85-a7f2-ce4567038bae
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-795d15b4-c34a-44e2-829b-118b0cbfae68
STEP: Updating configmap cm-test-opt-upd-eb647f80-d022-4a85-a7f2-ce4567038bae
STEP: Creating configMap with name cm-test-opt-create-f89d459d-c836-4d70-81a1-34735b4cf824
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:50:09.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8912" for this suite.
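The "optional updates" test above hinges on marking projected ConfigMap sources `optional: true`, so the pod keeps running when a referenced ConfigMap is deleted, and picks up one that is created after the pod starts. A sketch of the relevant volume sources, using the ConfigMap names from the log (volume names are illustrative):

```yaml
volumes:
- name: delcm-volume                  # illustrative volume name
  projected:
    sources:
    - configMap:
        name: cm-test-opt-del-795d15b4-c34a-44e2-829b-118b0cbfae68
        optional: true                # pod stays Running after this ConfigMap is deleted
- name: createcm-volume
  projected:
    sources:
    - configMap:
        name: cm-test-opt-create-f89d459d-c836-4d70-81a1-34735b4cf824
        optional: true                # created only after the pod starts; the volume
                                      # populates once the ConfigMap exists
```

Without `optional: true`, a missing ConfigMap would instead block the pod from starting.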
• [SLOW TEST:10.270 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2152,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:50:09.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
May 24 21:50:09.728: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 24 21:50:09.746: INFO: Waiting for terminating namespaces to be deleted...
May 24 21:50:09.749: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
May 24 21:50:09.754: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 24 21:50:09.754: INFO: Container kindnet-cni ready: true, restart count 0
May 24 21:50:09.754: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 24 21:50:09.754: INFO: Container kube-proxy ready: true, restart count 0
May 24 21:50:09.754: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
May 24 21:50:09.760: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 24 21:50:09.760: INFO: Container kube-proxy ready: true, restart count 0
May 24 21:50:09.760: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
May 24 21:50:09.760: INFO: Container kube-hunter ready: false, restart count 0
May 24 21:50:09.760: INFO: pod-projected-configmaps-5f68d92f-6b4e-40bb-b7c7-434ae7946dc2 from projected-8912 started at 2020-05-24 21:49:59 +0000 UTC (3 container statuses recorded)
May 24 21:50:09.760: INFO: Container createcm-volume-test ready: true, restart count 0
May 24 21:50:09.760: INFO: Container delcm-volume-test ready: true, restart count 0
May 24 21:50:09.760: INFO: Container updcm-volume-test ready: true, restart count 0
May 24 21:50:09.760: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 24 21:50:09.760: INFO: Container kindnet-cni ready: true, restart count 0
May 24 21:50:09.760: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
May 24 21:50:09.760: INFO: Container kube-bench ready: false, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ec679690-f258-46ad-b988-b5d489eeb040 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-ec679690-f258-46ad-b988-b5d489eeb040 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ec679690-f258-46ad-b988-b5d489eeb040
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:50:28.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1093" for this suite.
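The scheduler predicate being validated above is that a host port conflict requires the same (hostPort, hostIP, protocol) triple, so three pods can all bind hostPort 54321 on one node as long as hostIP or protocol differs. A sketch of pod2's container ports, using the values from the log (pod1 is identical except `hostIP: 127.0.0.1`; pod3 is identical to pod2 except `protocol: UDP`):

```yaml
# Container port spec for pod2 in the test above.
ports:
- containerPort: 54321
  hostPort: 54321
  hostIP: 127.0.0.2
  protocol: TCP
```

All three pods are pinned to the same node (via the random `kubernetes.io/e2e-...` label applied above) and all three are expected to schedule.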
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:18.361 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":127,"skipped":2163,"failed":0}
S
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:50:28.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
May 24 21:50:28.060: INFO: Waiting up to 5m0s for pod "downward-api-d68779ee-afce-44d1-8c3a-6750d430afaa" in namespace "downward-api-1694" to be "success or failure"
May 24 21:50:28.065: INFO: Pod "downward-api-d68779ee-afce-44d1-8c3a-6750d430afaa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.882315ms
May 24 21:50:30.102: INFO: Pod "downward-api-d68779ee-afce-44d1-8c3a-6750d430afaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041093641s
May 24 21:50:32.107: INFO: Pod "downward-api-d68779ee-afce-44d1-8c3a-6750d430afaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046250436s
STEP: Saw pod success
May 24 21:50:32.107: INFO: Pod "downward-api-d68779ee-afce-44d1-8c3a-6750d430afaa" satisfied condition "success or failure"
May 24 21:50:32.110: INFO: Trying to get logs from node jerma-worker pod downward-api-d68779ee-afce-44d1-8c3a-6750d430afaa container dapi-container: 
STEP: delete the pod
May 24 21:50:32.133: INFO: Waiting for pod downward-api-d68779ee-afce-44d1-8c3a-6750d430afaa to disappear
May 24 21:50:32.137: INFO: Pod downward-api-d68779ee-afce-44d1-8c3a-6750d430afaa no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:50:32.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1694" for this suite.
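The "pod UID as env vars" test above uses the downward API's `fieldRef` mechanism to inject the pod's own UID into the container environment. A sketch of the relevant container fragment (the env var name is illustrative; `metadata.uid` is the field the test exposes):

```yaml
containers:
- name: dapi-container
  env:
  - name: POD_UID                 # illustrative name for the injected variable
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid   # the pod's UID, assigned by the apiserver at creation
```

The container prints its environment, and the test compares the printed value against the UID the apiserver assigned to the pod.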
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2164,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:50:32.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 24 21:50:32.222: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:50:33.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8226" for this suite.
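The custom resource defaulting test above covers `default` values declared in a CRD's structural schema: the apiserver applies them both when serving create/update requests and when reading stored objects that predate the default. A sketch of the schema shape involved (field names are illustrative; the test registers its own CRD programmatically):

```yaml
# Fragment of a CRD version entry; `replicas` is an illustrative field name.
versions:
- name: v1
  served: true
  storage: true
  schema:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            replicas:
              type: integer
              default: 1   # applied on admission and when reading from storage
```

"For requests and from storage" in the test name refers to exactly those two code paths.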
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":129,"skipped":2247,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:50:33.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
May 24 21:50:33.877: INFO: Waiting up to 5m0s for pod "client-containers-201b4617-36b2-4f2a-b781-d05950265877" in namespace "containers-5726" to be "success or failure"
May 24 21:50:33.888: INFO: Pod "client-containers-201b4617-36b2-4f2a-b781-d05950265877": Phase="Pending", Reason="", readiness=false. Elapsed: 11.076186ms
May 24 21:50:35.985: INFO: Pod "client-containers-201b4617-36b2-4f2a-b781-d05950265877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108142273s
May 24 21:50:37.989: INFO: Pod "client-containers-201b4617-36b2-4f2a-b781-d05950265877": Phase="Running", Reason="", readiness=true. Elapsed: 4.111246249s
May 24 21:50:39.993: INFO: Pod "client-containers-201b4617-36b2-4f2a-b781-d05950265877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115652435s
STEP: Saw pod success
May 24 21:50:39.993: INFO: Pod "client-containers-201b4617-36b2-4f2a-b781-d05950265877" satisfied condition "success or failure"
May 24 21:50:39.996: INFO: Trying to get logs from node jerma-worker2 pod client-containers-201b4617-36b2-4f2a-b781-d05950265877 container test-container: 
STEP: delete the pod
May 24 21:50:40.016: INFO: Waiting for pod client-containers-201b4617-36b2-4f2a-b781-d05950265877 to disappear
May 24 21:50:40.018: INFO: Pod client-containers-201b4617-36b2-4f2a-b781-d05950265877 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:50:40.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5726" for this suite.
• [SLOW TEST:6.461 seconds]
[k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2259,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:50:40.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:50:40.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5849" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2281,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:50:40.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-0ba655d1-53da-4022-bbc1-660a8d50e869
STEP: Creating a pod to test consume configMaps
May 24 21:50:40.409: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0142e5e1-d17d-4a06-a18a-6d50cf9508fc" in namespace "projected-511" to be "success or failure"
May 24 21:50:40.447: INFO: Pod "pod-projected-configmaps-0142e5e1-d17d-4a06-a18a-6d50cf9508fc": Phase="Pending", Reason="", readiness=false. Elapsed: 38.396763ms
May 24 21:50:42.531: INFO: Pod "pod-projected-configmaps-0142e5e1-d17d-4a06-a18a-6d50cf9508fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122283786s
May 24 21:50:44.535: INFO: Pod "pod-projected-configmaps-0142e5e1-d17d-4a06-a18a-6d50cf9508fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126695924s
STEP: Saw pod success
May 24 21:50:44.536: INFO: Pod "pod-projected-configmaps-0142e5e1-d17d-4a06-a18a-6d50cf9508fc" satisfied condition "success or failure"
May 24 21:50:44.539: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-0142e5e1-d17d-4a06-a18a-6d50cf9508fc container projected-configmap-volume-test: 
STEP: delete the pod
May 24 21:50:44.634: INFO: Waiting for pod pod-projected-configmaps-0142e5e1-d17d-4a06-a18a-6d50cf9508fc to disappear
May 24 21:50:44.725: INFO: Pod pod-projected-configmaps-0142e5e1-d17d-4a06-a18a-6d50cf9508fc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:50:44.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-511" for this suite.
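"With mappings" in the test above means the projected ConfigMap volume uses an explicit `items` list to rename keys to chosen file paths, instead of exposing every key under its own name. A sketch of the volume fragment, using the ConfigMap name from the log (the key and path names are illustrative):

```yaml
volumes:
- name: projected-configmap-volume
  projected:
    sources:
    - configMap:
        name: projected-configmap-test-volume-map-0ba655d1-53da-4022-bbc1-660a8d50e869
        items:
        - key: data-1                # illustrative key in the ConfigMap
          path: path/to/data-2       # file path inside the volume it maps to
```

Only the listed keys appear in the volume, at the mapped paths; the test's container reads the mapped file and the suite checks its contents against the ConfigMap data.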
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2296,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:50:44.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
May 24 21:50:44.838: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 24 21:50:44.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6764'
May 24 21:50:45.182: INFO: stderr: ""
May 24 21:50:45.182: INFO: stdout: "service/agnhost-slave created\n"
May 24 21:50:45.182: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 24 21:50:45.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6764'
May 24 21:50:45.459: INFO: stderr: ""
May 24 21:50:45.459: INFO: stdout: "service/agnhost-master created\n"
May 24 21:50:45.460: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 24 21:50:45.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6764'
May 24 21:50:45.811: INFO: stderr: ""
May 24 21:50:45.811: INFO: stdout: "service/frontend created\n"
May 24 21:50:45.811: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 24 21:50:45.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6764'
May 24 21:50:46.067: INFO: stderr: ""
May 24 21:50:46.067: INFO: stdout: "deployment.apps/frontend created\n"
May 24 21:50:46.067: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 24 21:50:46.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6764'
May 24 21:50:46.982: INFO: stderr: ""
May 24 21:50:46.982: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 24 21:50:46.982: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 24 21:50:46.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6764'
May 24 21:50:47.240: INFO: stderr: ""
May 24 21:50:47.240: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 24 21:50:47.240: INFO: Waiting for all frontend pods to be Running.
May 24 21:50:57.290: INFO: Waiting for frontend to serve content.
May 24 21:50:57.302: INFO: Trying to add a new entry to the guestbook.
May 24 21:50:57.320: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 24 21:50:57.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6764'
May 24 21:50:57.466: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 24 21:50:57.466: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 24 21:50:57.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6764'
May 24 21:50:57.644: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 24 21:50:57.644: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 24 21:50:57.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6764' May 24 21:50:57.769: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 21:50:57.769: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 24 21:50:57.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6764' May 24 21:50:57.877: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 21:50:57.877: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 24 21:50:57.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6764' May 24 21:50:57.972: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 21:50:57.972: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 24 21:50:57.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6764' May 24 21:50:58.080: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 24 21:50:58.080: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:50:58.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6764" for this suite. • [SLOW TEST:13.335 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":133,"skipped":2300,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:50:58.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:51:58.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8484" for this suite. • [SLOW TEST:60.075 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2302,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:51:58.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 24 21:51:58.213: INFO: Waiting up to 5m0s for pod "pod-c63e517b-1011-4ef7-9a1d-972095fbd700" in namespace "emptydir-7824" to be "success or failure" 
May 24 21:51:58.217: INFO: Pod "pod-c63e517b-1011-4ef7-9a1d-972095fbd700": Phase="Pending", Reason="", readiness=false. Elapsed: 3.723393ms May 24 21:52:00.256: INFO: Pod "pod-c63e517b-1011-4ef7-9a1d-972095fbd700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043157414s May 24 21:52:02.260: INFO: Pod "pod-c63e517b-1011-4ef7-9a1d-972095fbd700": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047169447s STEP: Saw pod success May 24 21:52:02.261: INFO: Pod "pod-c63e517b-1011-4ef7-9a1d-972095fbd700" satisfied condition "success or failure" May 24 21:52:02.264: INFO: Trying to get logs from node jerma-worker pod pod-c63e517b-1011-4ef7-9a1d-972095fbd700 container test-container: STEP: delete the pod May 24 21:52:02.296: INFO: Waiting for pod pod-c63e517b-1011-4ef7-9a1d-972095fbd700 to disappear May 24 21:52:02.307: INFO: Pod pod-c63e517b-1011-4ef7-9a1d-972095fbd700 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:52:02.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7824" for this suite. 
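The emptyDir test variant above runs as a non-root user against a default-medium volume and checks 0777 file permissions. A sketch of the corresponding pod spec, under the assumption that a busybox image and UID 1001 stand in for the test's actual container and user:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example     # hypothetical name
spec:
  securityContext:
    runAsUser: 1001               # the "non-root" part of the test variant
  containers:
  - name: test-container
    image: busybox
    # List the mount so the effective mode is visible in the container log
    command: ["sh", "-c", "ls -ld /test-volume && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # "default" medium (node storage, not tmpfs)
  restartPolicy: Never
```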
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2311,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:52:02.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:52:07.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3416" for this suite. 
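The ReplicationController adoption test above first creates an orphan pod labeled `name=pod-adoption`, then creates an RC whose selector matches it, and asserts the RC takes ownership rather than creating a new replica. A manifest sketch of both objects (image and command are assumptions):

```yaml
# Orphan pod: exists before the controller, carries the matching label
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: busybox
    command: ["sleep", "3600"]
---
# RC with a matching selector; at replicas: 1 it adopts the existing pod
# (sets itself as ownerReference) instead of creating a second one
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: busybox
        command: ["sleep", "3600"]
```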
• [SLOW TEST:5.145 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":136,"skipped":2314,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:52:07.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:52:07.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e757664-dfb7-4088-9320-71c806884a50" in namespace "projected-6484" to be "success or failure" May 24 21:52:07.552: INFO: Pod "downwardapi-volume-8e757664-dfb7-4088-9320-71c806884a50": Phase="Pending", Reason="", readiness=false. Elapsed: 37.449472ms May 24 21:52:09.568: INFO: Pod "downwardapi-volume-8e757664-dfb7-4088-9320-71c806884a50": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053490158s May 24 21:52:11.586: INFO: Pod "downwardapi-volume-8e757664-dfb7-4088-9320-71c806884a50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071715986s STEP: Saw pod success May 24 21:52:11.586: INFO: Pod "downwardapi-volume-8e757664-dfb7-4088-9320-71c806884a50" satisfied condition "success or failure" May 24 21:52:11.589: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8e757664-dfb7-4088-9320-71c806884a50 container client-container: STEP: delete the pod May 24 21:52:11.626: INFO: Waiting for pod downwardapi-volume-8e757664-dfb7-4088-9320-71c806884a50 to disappear May 24 21:52:11.634: INFO: Pod downwardapi-volume-8e757664-dfb7-4088-9320-71c806884a50 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:52:11.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6484" for this suite. 
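The projected downwardAPI test above surfaces the container's own CPU limit as a file. A sketch of that mechanism via `resourceFieldRef` — the limit value and file name here are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                    # the value the volume below exposes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m            # 500m is written to the file as "500"
  restartPolicy: Never
```

`divisor` controls the unit: with `1m`, a 500m limit is reported as 500; with the default divisor of 1, it would be rounded up to whole cores.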
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:52:11.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:52:26.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4228" for this suite. STEP: Destroying namespace "nsdeletetest-5176" for this suite. May 24 21:52:26.876: INFO: Namespace nsdeletetest-5176 was already deleted STEP: Destroying namespace "nsdeletetest-5712" for this suite. 
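The namespace-deletion test above relies on namespaced garbage collection: deleting a namespace removes every object inside it. A minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example     # hypothetical name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  # Deleting the namespace above garbage-collects this pod; recreating
  # the namespace afterwards yields an empty namespace, which is what
  # the test verifies
  namespace: nsdeletetest-example
spec:
  containers:
  - name: nginx
    image: nginx
```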
• [SLOW TEST:15.238 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":138,"skipped":2339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:52:26.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 24 21:52:31.025: INFO: Pod pod-hostip-b8e1671a-32e8-48ae-8394-15bd44233701 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:52:31.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8535" for this suite. 
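The host-IP test above reads `status.hostIP` from the pod object through the API. A related illustration (not what the test itself does) is exposing the same field to the container through the downward API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-example       # hypothetical name
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo $HOST_IP && sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          # Same status field the test reads via the API client;
          # populated once the pod is scheduled to a node
          fieldPath: status.hostIP
```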
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2364,"failed":0} SS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:52:31.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-fbda70f4-3ab9-49d8-adc3-f44ec1945ce2 STEP: Creating secret with name secret-projected-all-test-volume-aca042ec-95b9-4727-8311-7f7df9074eed STEP: Creating a pod to test Check all projections for projected volume plugin May 24 21:52:31.164: INFO: Waiting up to 5m0s for pod "projected-volume-5f740984-9814-4f9b-b7f1-47fe2b16b126" in namespace "projected-9915" to be "success or failure" May 24 21:52:31.167: INFO: Pod "projected-volume-5f740984-9814-4f9b-b7f1-47fe2b16b126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.769602ms May 24 21:52:33.171: INFO: Pod "projected-volume-5f740984-9814-4f9b-b7f1-47fe2b16b126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00697028s May 24 21:52:35.176: INFO: Pod "projected-volume-5f740984-9814-4f9b-b7f1-47fe2b16b126": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011202777s STEP: Saw pod success May 24 21:52:35.176: INFO: Pod "projected-volume-5f740984-9814-4f9b-b7f1-47fe2b16b126" satisfied condition "success or failure" May 24 21:52:35.178: INFO: Trying to get logs from node jerma-worker pod projected-volume-5f740984-9814-4f9b-b7f1-47fe2b16b126 container projected-all-volume-test: STEP: delete the pod May 24 21:52:35.330: INFO: Waiting for pod projected-volume-5f740984-9814-4f9b-b7f1-47fe2b16b126 to disappear May 24 21:52:35.341: INFO: Pod projected-volume-5f740984-9814-4f9b-b7f1-47fe2b16b126 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:52:35.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9915" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2366,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:52:35.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication 
STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:52:35.948: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:52:38.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953956, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953956, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953956, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953955, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:52:40.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953956, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953956, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953956, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725953955, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:52:43.252: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:52:43.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7504" for this suite. STEP: Destroying namespace "webhook-7504-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.555 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":141,"skipped":2370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:52:43.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:52:44.009: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 24 21:52:46.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8400 create -f -' May 24 21:52:50.493: INFO: stderr: "" May 24 
21:52:50.493: INFO: stdout: "e2e-test-crd-publish-openapi-7371-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 24 21:52:50.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8400 delete e2e-test-crd-publish-openapi-7371-crds test-cr' May 24 21:52:50.592: INFO: stderr: "" May 24 21:52:50.592: INFO: stdout: "e2e-test-crd-publish-openapi-7371-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 24 21:52:50.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8400 apply -f -' May 24 21:52:50.854: INFO: stderr: "" May 24 21:52:50.854: INFO: stdout: "e2e-test-crd-publish-openapi-7371-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 24 21:52:50.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8400 delete e2e-test-crd-publish-openapi-7371-crds test-cr' May 24 21:52:51.081: INFO: stderr: "" May 24 21:52:51.082: INFO: stdout: "e2e-test-crd-publish-openapi-7371-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 24 21:52:51.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7371-crds' May 24 21:52:51.306: INFO: stderr: "" May 24 21:52:51.306: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7371-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:52:53.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8400" for this suite. • [SLOW TEST:9.281 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":142,"skipped":2403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:52:53.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1398 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1398 STEP: creating replication controller externalsvc in namespace services-1398 I0524 21:52:53.419637 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1398, replica count: 2 I0524 21:52:56.470086 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 21:52:59.470283 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 24 21:52:59.507: INFO: Creating new exec pod May 24 21:53:03.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1398 execpoddf4zs -- /bin/sh -x -c nslookup clusterip-service' May 24 21:53:03.891: INFO: stderr: "I0524 21:53:03.702157 1684 log.go:172] (0xc0008d49a0) (0xc0009840a0) Create stream\nI0524 21:53:03.702212 1684 log.go:172] (0xc0008d49a0) (0xc0009840a0) Stream added, broadcasting: 1\nI0524 21:53:03.704318 1684 
log.go:172] (0xc0008d49a0) Reply frame received for 1\nI0524 21:53:03.704348 1684 log.go:172] (0xc0008d49a0) (0xc00061c820) Create stream\nI0524 21:53:03.704356 1684 log.go:172] (0xc0008d49a0) (0xc00061c820) Stream added, broadcasting: 3\nI0524 21:53:03.705713 1684 log.go:172] (0xc0008d49a0) Reply frame received for 3\nI0524 21:53:03.705731 1684 log.go:172] (0xc0008d49a0) (0xc000984140) Create stream\nI0524 21:53:03.705737 1684 log.go:172] (0xc0008d49a0) (0xc000984140) Stream added, broadcasting: 5\nI0524 21:53:03.706445 1684 log.go:172] (0xc0008d49a0) Reply frame received for 5\nI0524 21:53:03.780867 1684 log.go:172] (0xc0008d49a0) Data frame received for 5\nI0524 21:53:03.780896 1684 log.go:172] (0xc000984140) (5) Data frame handling\nI0524 21:53:03.781022 1684 log.go:172] (0xc000984140) (5) Data frame sent\n+ nslookup clusterip-service\nI0524 21:53:03.880506 1684 log.go:172] (0xc0008d49a0) Data frame received for 3\nI0524 21:53:03.880529 1684 log.go:172] (0xc00061c820) (3) Data frame handling\nI0524 21:53:03.880547 1684 log.go:172] (0xc00061c820) (3) Data frame sent\nI0524 21:53:03.881622 1684 log.go:172] (0xc0008d49a0) Data frame received for 3\nI0524 21:53:03.881650 1684 log.go:172] (0xc00061c820) (3) Data frame handling\nI0524 21:53:03.881671 1684 log.go:172] (0xc00061c820) (3) Data frame sent\nI0524 21:53:03.882157 1684 log.go:172] (0xc0008d49a0) Data frame received for 3\nI0524 21:53:03.882213 1684 log.go:172] (0xc00061c820) (3) Data frame handling\nI0524 21:53:03.882268 1684 log.go:172] (0xc0008d49a0) Data frame received for 5\nI0524 21:53:03.882285 1684 log.go:172] (0xc000984140) (5) Data frame handling\nI0524 21:53:03.883728 1684 log.go:172] (0xc0008d49a0) Data frame received for 1\nI0524 21:53:03.883748 1684 log.go:172] (0xc0009840a0) (1) Data frame handling\nI0524 21:53:03.883757 1684 log.go:172] (0xc0009840a0) (1) Data frame sent\nI0524 21:53:03.883766 1684 log.go:172] (0xc0008d49a0) (0xc0009840a0) Stream removed, broadcasting: 1\nI0524 
21:53:03.883778 1684 log.go:172] (0xc0008d49a0) Go away received\nI0524 21:53:03.884162 1684 log.go:172] (0xc0008d49a0) (0xc0009840a0) Stream removed, broadcasting: 1\nI0524 21:53:03.884191 1684 log.go:172] (0xc0008d49a0) (0xc00061c820) Stream removed, broadcasting: 3\nI0524 21:53:03.884212 1684 log.go:172] (0xc0008d49a0) (0xc000984140) Stream removed, broadcasting: 5\n" May 24 21:53:03.891: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1398.svc.cluster.local\tcanonical name = externalsvc.services-1398.svc.cluster.local.\nName:\texternalsvc.services-1398.svc.cluster.local\nAddress: 10.102.92.90\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1398, will wait for the garbage collector to delete the pods May 24 21:53:03.950: INFO: Deleting ReplicationController externalsvc took: 5.584442ms May 24 21:53:04.050: INFO: Terminating ReplicationController externalsvc pods took: 100.250569ms May 24 21:53:19.577: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:53:19.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1398" for this suite. 
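The ClusterIP-to-ExternalName change exercised by the test above can be sketched as a pair of manifests. This is an illustrative reconstruction, not the framework's generated objects; the Service name, namespace, and target FQDN are taken from the log (the nslookup output shows `clusterip-service.services-1398.svc.cluster.local` resolving as a CNAME to `externalsvc.services-1398.svc.cluster.local`), while the selector and port are assumed for completeness.

```yaml
# Hypothetical "before" Service -- type defaults to ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service      # name taken from the test log
  namespace: services-1398
spec:
  selector:
    name: externalsvc          # assumed selector; the e2e RC is named externalsvc
  ports:
  - port: 80
---
# Hypothetical "after" Service -- the type change makes cluster DNS answer
# with a CNAME instead of a ClusterIP, matching the nslookup output above.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-1398
spec:
  type: ExternalName
  externalName: externalsvc.services-1398.svc.cluster.local
```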
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.418 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":143,"skipped":2436,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:53:19.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 21:53:19.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1900' May 24 
21:53:19.795: INFO: stderr: "" May 24 21:53:19.795: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 24 21:53:24.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1900 -o json' May 24 21:53:24.939: INFO: stderr: "" May 24 21:53:24.939: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-24T21:53:19Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1900\",\n \"resourceVersion\": \"18861616\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1900/pods/e2e-test-httpd-pod\",\n \"uid\": \"efc3df17-dde6-4a52-b27e-2419708bbaf0\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-r57kn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": 
\"default-token-r57kn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-r57kn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T21:53:19Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T21:53:22Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T21:53:22Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T21:53:19Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://ae88b50d1db2223fd3dbba450ace2c9a05a4a7f6cb08987bfb33f49c05509718\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-24T21:53:22Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.172\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.172\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-24T21:53:19Z\"\n }\n}\n" STEP: replace the image in the pod May 24 21:53:24.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1900' May 24 21:53:25.592: INFO: stderr: "" May 24 21:53:25.592: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 24 21:53:25.629: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1900' May 24 21:53:39.236: INFO: stderr: "" May 24 21:53:39.236: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:53:39.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1900" for this suite. • [SLOW TEST:19.645 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":144,"skipped":2448,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:53:39.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4534, will wait for the garbage collector to delete the pods May 24 
21:53:43.394: INFO: Deleting Job.batch foo took: 8.244221ms May 24 21:53:43.694: INFO: Terminating Job.batch foo pods took: 300.293823ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:54:19.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4534" for this suite. • [SLOW TEST:40.257 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":145,"skipped":2458,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:54:19.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery 
document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:54:19.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2225" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":146,"skipped":2466,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:54:19.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 24 21:54:24.238: INFO: Successfully updated pod 
"pod-update-3a36ba08-d183-4882-b10b-d90ce5978c2a" STEP: verifying the updated pod is in kubernetes May 24 21:54:24.253: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:54:24.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6009" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2485,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:54:24.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 24 21:54:24.362: INFO: Waiting up to 5m0s for pod "downward-api-4909f9b9-f800-4d26-bef0-f14d3b71db14" in namespace "downward-api-4437" to be "success or failure" May 24 21:54:24.364: INFO: Pod "downward-api-4909f9b9-f800-4d26-bef0-f14d3b71db14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124007ms May 24 21:54:26.384: INFO: Pod "downward-api-4909f9b9-f800-4d26-bef0-f14d3b71db14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021585315s May 24 21:54:28.516: INFO: Pod "downward-api-4909f9b9-f800-4d26-bef0-f14d3b71db14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.15349406s STEP: Saw pod success May 24 21:54:28.516: INFO: Pod "downward-api-4909f9b9-f800-4d26-bef0-f14d3b71db14" satisfied condition "success or failure" May 24 21:54:28.526: INFO: Trying to get logs from node jerma-worker pod downward-api-4909f9b9-f800-4d26-bef0-f14d3b71db14 container dapi-container: STEP: delete the pod May 24 21:54:28.565: INFO: Waiting for pod downward-api-4909f9b9-f800-4d26-bef0-f14d3b71db14 to disappear May 24 21:54:28.604: INFO: Pod downward-api-4909f9b9-f800-4d26-bef0-f14d3b71db14 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:54:28.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4437" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2494,"failed":0} SSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:54:28.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:54:28.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6603" for this suite. 
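The "host IP as an env var" downward-API test above can be approximated with a minimal pod manifest. This is a sketch, not the test's actual spec: the container name `dapi-container` matches the log, but the pod name, image, command, and variable name are illustrative. The `fieldRef` to `status.hostIP` is the downward-API mechanism the test verifies.

```yaml
# Hypothetical pod analogous to the downward-api env-var test above.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name taken from the log
    image: busybox             # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # injects the node's IP at pod admission
```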
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":149,"skipped":2499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:54:28.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:54:28.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33cad07a-cac0-4662-bc57-df59da1bd0fe" in namespace "downward-api-8951" to be "success or failure" May 24 21:54:28.928: INFO: Pod "downwardapi-volume-33cad07a-cac0-4662-bc57-df59da1bd0fe": Phase="Pending", Reason="", readiness=false. Elapsed: 13.019302ms May 24 21:54:30.959: INFO: Pod "downwardapi-volume-33cad07a-cac0-4662-bc57-df59da1bd0fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04432921s May 24 21:54:32.962: INFO: Pod "downwardapi-volume-33cad07a-cac0-4662-bc57-df59da1bd0fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047822335s STEP: Saw pod success May 24 21:54:32.963: INFO: Pod "downwardapi-volume-33cad07a-cac0-4662-bc57-df59da1bd0fe" satisfied condition "success or failure" May 24 21:54:32.965: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-33cad07a-cac0-4662-bc57-df59da1bd0fe container client-container: STEP: delete the pod May 24 21:54:33.018: INFO: Waiting for pod downwardapi-volume-33cad07a-cac0-4662-bc57-df59da1bd0fe to disappear May 24 21:54:33.021: INFO: Pod downwardapi-volume-33cad07a-cac0-4662-bc57-df59da1bd0fe no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:54:33.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8951" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:54:33.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a 
replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 24 21:54:33.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8719' May 24 21:54:33.339: INFO: stderr: "" May 24 21:54:33.339: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 21:54:33.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8719' May 24 21:54:33.462: INFO: stderr: "" May 24 21:54:33.462: INFO: stdout: "update-demo-nautilus-fcbb6 update-demo-nautilus-xrckr " May 24 21:54:33.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fcbb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:33.552: INFO: stderr: "" May 24 21:54:33.552: INFO: stdout: "" May 24 21:54:33.552: INFO: update-demo-nautilus-fcbb6 is created but not running May 24 21:54:38.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8719' May 24 21:54:38.655: INFO: stderr: "" May 24 21:54:38.655: INFO: stdout: "update-demo-nautilus-fcbb6 update-demo-nautilus-xrckr " May 24 21:54:38.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fcbb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:38.740: INFO: stderr: "" May 24 21:54:38.740: INFO: stdout: "true" May 24 21:54:38.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fcbb6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:38.831: INFO: stderr: "" May 24 21:54:38.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:54:38.831: INFO: validating pod update-demo-nautilus-fcbb6 May 24 21:54:38.835: INFO: got data: { "image": "nautilus.jpg" } May 24 21:54:38.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 21:54:38.835: INFO: update-demo-nautilus-fcbb6 is verified up and running May 24 21:54:38.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrckr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:38.934: INFO: stderr: "" May 24 21:54:38.934: INFO: stdout: "true" May 24 21:54:38.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrckr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:39.036: INFO: stderr: "" May 24 21:54:39.036: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:54:39.036: INFO: validating pod update-demo-nautilus-xrckr May 24 21:54:39.040: INFO: got data: { "image": "nautilus.jpg" } May 24 21:54:39.040: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 24 21:54:39.040: INFO: update-demo-nautilus-xrckr is verified up and running STEP: scaling down the replication controller May 24 21:54:39.043: INFO: scanned /root for discovery docs: May 24 21:54:39.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8719' May 24 21:54:40.187: INFO: stderr: "" May 24 21:54:40.187: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 21:54:40.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8719' May 24 21:54:40.277: INFO: stderr: "" May 24 21:54:40.277: INFO: stdout: "update-demo-nautilus-fcbb6 update-demo-nautilus-xrckr " STEP: Replicas for name=update-demo: expected=1 actual=2 May 24 21:54:45.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8719' May 24 21:54:45.387: INFO: stderr: "" May 24 21:54:45.387: INFO: stdout: "update-demo-nautilus-fcbb6 update-demo-nautilus-xrckr " STEP: Replicas for name=update-demo: expected=1 actual=2 May 24 21:54:50.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8719' May 24 21:54:50.497: INFO: stderr: "" May 24 21:54:50.497: INFO: stdout: "update-demo-nautilus-xrckr " May 24 21:54:50.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrckr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:50.588: INFO: stderr: "" May 24 21:54:50.588: INFO: stdout: "true" May 24 21:54:50.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrckr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:50.682: INFO: stderr: "" May 24 21:54:50.682: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:54:50.682: INFO: validating pod update-demo-nautilus-xrckr May 24 21:54:50.685: INFO: got data: { "image": "nautilus.jpg" } May 24 21:54:50.685: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 21:54:50.685: INFO: update-demo-nautilus-xrckr is verified up and running STEP: scaling up the replication controller May 24 21:54:50.688: INFO: scanned /root for discovery docs: May 24 21:54:50.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8719' May 24 21:54:51.817: INFO: stderr: "" May 24 21:54:51.817: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 21:54:51.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8719' May 24 21:54:51.912: INFO: stderr: "" May 24 21:54:51.912: INFO: stdout: "update-demo-nautilus-tpcjj update-demo-nautilus-xrckr " May 24 21:54:51.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpcjj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:51.998: INFO: stderr: "" May 24 21:54:51.998: INFO: stdout: "" May 24 21:54:51.998: INFO: update-demo-nautilus-tpcjj is created but not running May 24 21:54:56.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8719' May 24 21:54:57.094: INFO: stderr: "" May 24 21:54:57.094: INFO: stdout: "update-demo-nautilus-tpcjj update-demo-nautilus-xrckr " May 24 21:54:57.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpcjj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:57.207: INFO: stderr: "" May 24 21:54:57.208: INFO: stdout: "true" May 24 21:54:57.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpcjj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:57.311: INFO: stderr: "" May 24 21:54:57.311: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:54:57.311: INFO: validating pod update-demo-nautilus-tpcjj May 24 21:54:57.315: INFO: got data: { "image": "nautilus.jpg" } May 24 21:54:57.315: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 21:54:57.315: INFO: update-demo-nautilus-tpcjj is verified up and running May 24 21:54:57.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrckr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:57.409: INFO: stderr: "" May 24 21:54:57.409: INFO: stdout: "true" May 24 21:54:57.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrckr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8719' May 24 21:54:57.492: INFO: stderr: "" May 24 21:54:57.492: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 21:54:57.492: INFO: validating pod update-demo-nautilus-xrckr May 24 21:54:57.495: INFO: got data: { "image": "nautilus.jpg" } May 24 21:54:57.495: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 21:54:57.495: INFO: update-demo-nautilus-xrckr is verified up and running STEP: using delete to clean up resources May 24 21:54:57.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8719' May 24 21:54:57.596: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 24 21:54:57.596: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 24 21:54:57.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8719' May 24 21:54:57.685: INFO: stderr: "No resources found in kubectl-8719 namespace.\n" May 24 21:54:57.685: INFO: stdout: "" May 24 21:54:57.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8719 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 21:54:57.776: INFO: stderr: "" May 24 21:54:57.776: INFO: stdout: "update-demo-nautilus-tpcjj\nupdate-demo-nautilus-xrckr\n" May 24 21:54:58.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8719' May 24 21:54:58.373: INFO: stderr: "No resources found in kubectl-8719 namespace.\n" May 24 21:54:58.373: INFO: stdout: "" May 24 21:54:58.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8719 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 21:54:58.465: INFO: stderr: "" May 24 21:54:58.465: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:54:58.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8719" for this suite. 
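The repeated `--template` checks above rely on kubectl's template engine, which extends Go's `text/template` with helpers such as `exists`. As a sketch only (the `exists` re-implementation and the mock pod JSON below are illustrative assumptions, not kubectl's actual code), the running-container check the test keeps polling can be exercised locally:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// exists mimics kubectl's template helper: walk nested maps by key and
// report whether the whole path is present.
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

// renderRunningCheck executes the same template string the e2e test passes
// to `kubectl get pods -o template` against an arbitrary pod document.
func renderRunningCheck(podJSON string) (string, error) {
	const tmpl = `{{if (exists . "status" "containerStatuses")}}` +
		`{{range .status.containerStatuses}}` +
		`{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}` +
		`{{end}}{{end}}`
	t, err := template.New("check").Funcs(template.FuncMap{"exists": exists}).Parse(tmpl)
	if err != nil {
		return "", err
	}
	var pod map[string]interface{}
	if err := json.Unmarshal([]byte(podJSON), &pod); err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, pod); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Mock pod with one running update-demo container (illustrative data).
	out, err := renderRunningCheck(`{"status":{"containerStatuses":[` +
		`{"name":"update-demo","state":{"running":{"startedAt":"2020-05-24T21:54:52Z"}}}]}}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

This also explains the `stdout: ""` lines above: while a container is still Pending, `state.running` is absent, the inner `exists` fails, and the template renders nothing, which the test treats as "created but not running".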
• [SLOW TEST:25.448 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":151,"skipped":2638,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:54:58.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-8d59b1fb-1656-49b1-8d29-eaa03c656d05
STEP: Creating a pod to test consume configMaps
May 24 21:54:58.767: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c59552a3-4fdc-4eb1-9353-8ec103bad4ed" in namespace "projected-7338" to be "success or failure"
May 24 21:54:58.771: INFO: Pod "pod-projected-configmaps-c59552a3-4fdc-4eb1-9353-8ec103bad4ed": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.400807ms May 24 21:55:00.774: INFO: Pod "pod-projected-configmaps-c59552a3-4fdc-4eb1-9353-8ec103bad4ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007022211s May 24 21:55:02.779: INFO: Pod "pod-projected-configmaps-c59552a3-4fdc-4eb1-9353-8ec103bad4ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01173674s STEP: Saw pod success May 24 21:55:02.779: INFO: Pod "pod-projected-configmaps-c59552a3-4fdc-4eb1-9353-8ec103bad4ed" satisfied condition "success or failure" May 24 21:55:02.783: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-c59552a3-4fdc-4eb1-9353-8ec103bad4ed container projected-configmap-volume-test: STEP: delete the pod May 24 21:55:02.803: INFO: Waiting for pod pod-projected-configmaps-c59552a3-4fdc-4eb1-9353-8ec103bad4ed to disappear May 24 21:55:02.808: INFO: Pod pod-projected-configmaps-c59552a3-4fdc-4eb1-9353-8ec103bad4ed no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:55:02.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7338" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2647,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:55:02.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 24 21:55:02.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afc5de66-e9ff-4d26-9847-af6d6edf91af" in namespace "downward-api-5464" to be "success or failure"
May 24 21:55:02.934: INFO: Pod "downwardapi-volume-afc5de66-e9ff-4d26-9847-af6d6edf91af": Phase="Pending", Reason="", readiness=false. Elapsed: 22.623462ms
May 24 21:55:04.938: INFO: Pod "downwardapi-volume-afc5de66-e9ff-4d26-9847-af6d6edf91af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026318868s
May 24 21:55:06.941: INFO: Pod "downwardapi-volume-afc5de66-e9ff-4d26-9847-af6d6edf91af": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.030131659s STEP: Saw pod success May 24 21:55:06.941: INFO: Pod "downwardapi-volume-afc5de66-e9ff-4d26-9847-af6d6edf91af" satisfied condition "success or failure" May 24 21:55:06.944: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-afc5de66-e9ff-4d26-9847-af6d6edf91af container client-container: STEP: delete the pod May 24 21:55:06.958: INFO: Waiting for pod downwardapi-volume-afc5de66-e9ff-4d26-9847-af6d6edf91af to disappear May 24 21:55:07.000: INFO: Pod downwardapi-volume-afc5de66-e9ff-4d26-9847-af6d6edf91af no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:55:07.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5464" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2651,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:55:07.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:55:07.162: INFO: Pod 
name cleanup-pod: Found 0 pods out of 1 May 24 21:55:12.168: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 21:55:12.168: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 24 21:55:12.205: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6226 /apis/apps/v1/namespaces/deployment-6226/deployments/test-cleanup-deployment 233feb69-af16-4fbb-a4a4-9a58d2f0134a 18862271 1 2020-05-24 21:55:12 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004799b38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 24 21:55:12.235: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-6226 /apis/apps/v1/namespaces/deployment-6226/replicasets/test-cleanup-deployment-55ffc6b7b6 d9928f5b-45b6-4714-b9b4-76bf2fe6ce04 18862273 1 2020-05-24 21:55:12 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 233feb69-af16-4fbb-a4a4-9a58d2f0134a 0xc002ebc057 0xc002ebc058}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ebc0c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] 
[] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 21:55:12.235: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 24 21:55:12.235: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6226 /apis/apps/v1/namespaces/deployment-6226/replicasets/test-cleanup-controller a6d97dbf-0c96-41c0-86e1-f6167e07cce8 18862272 1 2020-05-24 21:55:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 233feb69-af16-4fbb-a4a4-9a58d2f0134a 0xc004799f87 0xc004799f88}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004799fe8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 24 21:55:12.283: INFO: Pod "test-cleanup-controller-zzjfr" is available: &Pod{ObjectMeta:{test-cleanup-controller-zzjfr test-cleanup-controller- deployment-6226 /api/v1/namespaces/deployment-6226/pods/test-cleanup-controller-zzjfr 11ad93d7-ed23-47d6-80ed-8ea3c5e6e244 18862258 0 2020-05-24 21:55:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller a6d97dbf-0c96-41c0-86e1-f6167e07cce8 0xc002ebc4f7 
0xc002ebc4f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-27mmz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-27mmz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-27mmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:55:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:55:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:55:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:55:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.229,StartTime:2020-05-24 21:55:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:55:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7af656a2b39e533885f32dc3023cb906ddccf5059bd68e12830f0ce6157856eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 
21:55:12.283: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-6458m" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-6458m test-cleanup-deployment-55ffc6b7b6- deployment-6226 /api/v1/namespaces/deployment-6226/pods/test-cleanup-deployment-55ffc6b7b6-6458m b98b97c6-1c92-428d-be72-72669fc137ac 18862279 0 2020-05-24 21:55:12 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 d9928f5b-45b6-4714-b9b4-76bf2fe6ce04 0xc002ebc687 0xc002ebc688}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-27mmz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-27mmz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-27mmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:55:12.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6226" for this suite. 
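Two details in the Deployment dump above are worth decoding. First, `RevisionHistoryLimit:*0` is why the old ReplicaSet gets deleted: with a history limit of zero, no superseded ReplicaSets are retained. Second, the `25%!,(MISSING)` fragments are a Go `fmt` artifact; the actual values are the defaults `MaxUnavailable: 25%` and `MaxSurge: 25%`. Those percentages resolve to absolute pod counts per the documented rule that surge rounds up and unavailable rounds down. A minimal sketch of that rounding (not the real `intstr` implementation; function names are mine):

```go
package main

import (
	"fmt"
	"math"
)

// resolveRolling converts RollingUpdate percentage parameters into absolute
// pod counts for a given replica count. Per the Deployment documentation,
// maxSurge rounds up and maxUnavailable rounds down.
func resolveRolling(replicas int, maxSurgePct, maxUnavailablePct float64) (surge, unavailable int) {
	surge = int(math.Ceil(float64(replicas) * maxSurgePct / 100))
	unavailable = int(math.Floor(float64(replicas) * maxUnavailablePct / 100))
	return surge, unavailable
}

func main() {
	// test-cleanup-deployment above runs 1 replica with the 25%/25% defaults:
	s, u := resolveRolling(1, 25, 25)
	fmt.Println(s, u)
}
```

For one replica this yields surge 1 and unavailable 0, which matches what the log shows: the new `test-cleanup-deployment-55ffc6b7b6` pod is created alongside the still-available `test-cleanup-controller` pod before the old one is retired.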
• [SLOW TEST:5.325 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":154,"skipped":2685,"failed":0}
SSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:55:12.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 24 21:55:38.416: INFO: Container started at 2020-05-24 21:55:15 +0000 UTC, pod became ready at 2020-05-24 21:55:38 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:55:38.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5252" for this suite.
• [SLOW TEST:26.090 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2688,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:55:38.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 24 21:55:38.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-707eb184-2b85-4866-95a3-2ebe4f8ed975" in namespace "projected-8746" to be "success or failure"
May 24 21:55:38.510: INFO: Pod "downwardapi-volume-707eb184-2b85-4866-95a3-2ebe4f8ed975": Phase="Pending", Reason="", readiness=false.
Elapsed: 8.47572ms May 24 21:55:40.535: INFO: Pod "downwardapi-volume-707eb184-2b85-4866-95a3-2ebe4f8ed975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033324791s May 24 21:55:42.539: INFO: Pod "downwardapi-volume-707eb184-2b85-4866-95a3-2ebe4f8ed975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037447012s STEP: Saw pod success May 24 21:55:42.539: INFO: Pod "downwardapi-volume-707eb184-2b85-4866-95a3-2ebe4f8ed975" satisfied condition "success or failure" May 24 21:55:42.542: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-707eb184-2b85-4866-95a3-2ebe4f8ed975 container client-container: STEP: delete the pod May 24 21:55:42.589: INFO: Waiting for pod downwardapi-volume-707eb184-2b85-4866-95a3-2ebe4f8ed975 to disappear May 24 21:55:42.604: INFO: Pod downwardapi-volume-707eb184-2b85-4866-95a3-2ebe4f8ed975 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:55:42.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8746" for this suite. 
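The downward API cases in this run (the env-var variant in the suite preamble and the volume/projected variants here) all exercise the same defaulting rule: when a container declares no cpu limit, the value exposed through `resourceFieldRef` falls back to the node's allocatable CPU, scaled by the requested divisor. A toy sketch of that fallback and scaling, assuming millicore arithmetic with round-up division as the downward API documentation describes (function names and the 16-core figure are mine, not the kubelet's code):

```go
package main

import "fmt"

// effectiveCPULimitMilli returns what limits.cpu resolves to for the
// downward API: the container's own limit when one is set, otherwise the
// node's allocatable CPU (both expressed in millicores here).
func effectiveCPULimitMilli(containerLimit, nodeAllocatable int64) int64 {
	if containerLimit > 0 {
		return containerLimit
	}
	return nodeAllocatable
}

// applyDivisor scales the resolved value the way resourceFieldRef's
// divisor does, rounding up (e.g. divisor "1" turns 16000m into 16).
func applyDivisor(valueMilli, divisorMilli int64) int64 {
	return (valueMilli + divisorMilli - 1) / divisorMilli
}

func main() {
	// No limit set on the test container; pretend the node allocates 16 cores.
	v := effectiveCPULimitMilli(0, 16000)
	fmt.Println(applyDivisor(v, 1000))
}
```

This is why the test can only assert that the reported limit equals node allocatable rather than a fixed number: the value depends on the worker node the pod lands on (`jerma-worker` vs `jerma-worker2` in this run).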
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2690,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:55:42.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 24 21:55:43.175: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 24 21:55:45.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954143, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing",
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954143, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:55:48.248: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:55:48.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4456" for this suite. STEP: Destroying namespace "webhook-4456-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.884 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":157,"skipped":2693,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:55:48.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 24 21:55:48.572: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-802 /api/v1/namespaces/watch-802/configmaps/e2e-watch-test-resource-version 77280ca0-0270-412c-821d-cdaee0317719 18862529 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 21:55:48.572: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-802 /api/v1/namespaces/watch-802/configmaps/e2e-watch-test-resource-version 77280ca0-0270-412c-821d-cdaee0317719 18862530 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:55:48.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-802" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":158,"skipped":2704,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:55:48.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 24 21:55:48.779: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a f289d2f1-70ac-4f2d-a5d7-8b052f8d0fc6 18862539 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 21:55:48.779: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a f289d2f1-70ac-4f2d-a5d7-8b052f8d0fc6 18862539 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 24 21:55:58.788: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a f289d2f1-70ac-4f2d-a5d7-8b052f8d0fc6 18862591 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 24 21:55:58.788: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a f289d2f1-70ac-4f2d-a5d7-8b052f8d0fc6 18862591 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 24 21:56:08.797: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a f289d2f1-70ac-4f2d-a5d7-8b052f8d0fc6 18862623 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 21:56:08.797: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a f289d2f1-70ac-4f2d-a5d7-8b052f8d0fc6 18862623 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 24 21:56:18.804: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4549 
/api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a f289d2f1-70ac-4f2d-a5d7-8b052f8d0fc6 18862653 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 21:56:18.804: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a f289d2f1-70ac-4f2d-a5d7-8b052f8d0fc6 18862653 0 2020-05-24 21:55:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 24 21:56:28.812: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-b f60b0083-699f-40d6-952f-d3ac50f3e2bf 18862683 0 2020-05-24 21:56:28 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 21:56:28.812: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-b f60b0083-699f-40d6-952f-d3ac50f3e2bf 18862683 0 2020-05-24 21:56:28 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 24 21:56:38.820: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-b f60b0083-699f-40d6-952f-d3ac50f3e2bf 18862713 0 2020-05-24 21:56:28 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 21:56:38.820: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-b f60b0083-699f-40d6-952f-d3ac50f3e2bf 18862713 0 2020-05-24 21:56:28 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:56:48.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4549" for this suite. • [SLOW TEST:60.235 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":159,"skipped":2738,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:56:48.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 21:56:48.938: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26b40ee6-1e70-4e7c-8139-e6be1096fff1" in namespace "projected-2481" to be "success or failure" May 24 21:56:48.958: INFO: Pod "downwardapi-volume-26b40ee6-1e70-4e7c-8139-e6be1096fff1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.091498ms May 24 21:56:50.997: INFO: Pod "downwardapi-volume-26b40ee6-1e70-4e7c-8139-e6be1096fff1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058499237s May 24 21:56:53.001: INFO: Pod "downwardapi-volume-26b40ee6-1e70-4e7c-8139-e6be1096fff1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063101883s STEP: Saw pod success May 24 21:56:53.001: INFO: Pod "downwardapi-volume-26b40ee6-1e70-4e7c-8139-e6be1096fff1" satisfied condition "success or failure" May 24 21:56:53.004: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-26b40ee6-1e70-4e7c-8139-e6be1096fff1 container client-container: STEP: delete the pod May 24 21:56:53.050: INFO: Waiting for pod downwardapi-volume-26b40ee6-1e70-4e7c-8139-e6be1096fff1 to disappear May 24 21:56:53.062: INFO: Pod downwardapi-volume-26b40ee6-1e70-4e7c-8139-e6be1096fff1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:56:53.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2481" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2769,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:56:53.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-587aaa05-5c81-48e9-92cb-4d66078d466e STEP: Creating secret with name s-test-opt-upd-0863d92b-fde1-43cc-a3f4-bf5f1180e1c9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-587aaa05-5c81-48e9-92cb-4d66078d466e STEP: Updating secret s-test-opt-upd-0863d92b-fde1-43cc-a3f4-bf5f1180e1c9 STEP: Creating secret with name s-test-opt-create-7206ee22-64a7-4fd5-9618-e4f8ea150901 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:57:01.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9385" for this suite. 
• [SLOW TEST:8.258 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2833,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:57:01.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 21:57:01.758: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 21:57:03.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954221, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954221, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954221, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954221, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 21:57:06.814: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:57:06.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9994" for this suite. STEP: Destroying namespace "webhook-9994-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.731 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":162,"skipped":2863,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:57:07.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:57:11.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2188" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2868,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:57:11.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:57:11.584: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 24 21:57:16.587: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 21:57:16.587: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 24 21:57:18.591: INFO: Creating deployment "test-rollover-deployment" May 24 21:57:18.624: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 24 21:57:20.631: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 24 21:57:20.636: INFO: Ensure that both replica sets have 1 created replica May 24 21:57:20.640: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 24 21:57:20.645: INFO: Updating deployment test-rollover-deployment May 24 21:57:20.645: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 24 
21:57:22.660: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 24 21:57:22.666: INFO: Make sure deployment "test-rollover-deployment" is complete May 24 21:57:22.671: INFO: all replica sets need to contain the pod-template-hash label May 24 21:57:22.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:57:24.678: INFO: all replica sets need to contain the pod-template-hash label May 24 21:57:24.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954244, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:57:26.678: INFO: all replica sets need to contain the pod-template-hash label May 24 21:57:26.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954244, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:57:28.686: INFO: all replica sets need to contain the pod-template-hash label May 24 21:57:28.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954244, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:57:30.681: INFO: all replica sets need to contain the pod-template-hash label May 24 21:57:30.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954244, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:57:32.683: INFO: all replica sets need to contain the pod-template-hash label May 24 21:57:32.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954244, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954238, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:57:34.835: INFO: May 24 21:57:34.835: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 24 21:57:35.075: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1764 /apis/apps/v1/namespaces/deployment-1764/deployments/test-rollover-deployment 68add0d4-edea-414a-ba68-f9dd32fa5487 18863113 2 2020-05-24 21:57:18 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0024e5f68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-24 21:57:18 +0000 UTC,LastTransitionTime:2020-05-24 21:57:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-24 21:57:34 +0000 UTC,LastTransitionTime:2020-05-24 21:57:18 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 24 21:57:35.078: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-1764 /apis/apps/v1/namespaces/deployment-1764/replicasets/test-rollover-deployment-574d6dfbff 34570206-39e1-4a4c-be7e-1fac1e477379 18863102 2 2020-05-24 21:57:20 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 68add0d4-edea-414a-ba68-f9dd32fa5487 0xc002856a47 0xc002856a48}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002856c68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 24 21:57:35.078: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 24 21:57:35.078: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1764 /apis/apps/v1/namespaces/deployment-1764/replicasets/test-rollover-controller 1b503e28-235f-4495-9863-b623c75a198d 18863111 2 2020-05-24 21:57:11 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 68add0d4-edea-414a-ba68-f9dd32fa5487 0xc002856707 0xc002856708}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002856878 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 21:57:35.078: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1764 /apis/apps/v1/namespaces/deployment-1764/replicasets/test-rollover-deployment-f6c94f66c 8e195f94-9536-4297-84bf-9b4f36d9a997 18863049 2 2020-05-24 21:57:18 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 68add0d4-edea-414a-ba68-f9dd32fa5487 0xc0028572e0 0xc0028572e1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002857768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 21:57:35.081: INFO: Pod "test-rollover-deployment-574d6dfbff-cpg9b" is available: 
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-cpg9b test-rollover-deployment-574d6dfbff- deployment-1764 /api/v1/namespaces/deployment-1764/pods/test-rollover-deployment-574d6dfbff-cpg9b f3747240-ed01-4449-a00a-61d6e8967334 18863066 0 2020-05-24 21:57:20 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 34570206-39e1-4a4c-be7e-1fac1e477379 0xc000e9e597 0xc000e9e598}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7wd5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7wd5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7wd5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,
TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:57:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:57:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:57:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:57:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.183,StartTime:2020-05-24 21:57:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:57:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9c6bd71dfa7f33e377e88bfbef56e00e7dc121eb33a01951337c8e0437d2c852,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:57:35.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1764" for this suite. 
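The Deployment dump above pins the rollover behavior that the test exercises: a RollingUpdate strategy with MaxUnavailable:0 and MaxSurge:1, plus MinReadySeconds:10 so new pods must stay ready before old ones are retired. A minimal manifest sketch carrying those same values (printed rather than applied, since applying needs a live cluster; pipe the output to `kubectl apply -f -` to use it):

```shell
# Sketch of a Deployment matching the strategy values in the dump above.
# The name, labels, and image mirror the test run; everything else is the
# minimum a Deployment needs.
MANIFEST=$(cat <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # allow one extra pod during the rollover
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF
)
printf '%s\n' "$MANIFEST"
```

With maxUnavailable at 0, the controller must bring up a surge pod and see it ready (for minReadySeconds) before scaling the old ReplicaSet down, which is exactly the progression the ReplicaSet dumps above show.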
• [SLOW TEST:23.633 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":164,"skipped":2870,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:57:35.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
May 24 21:57:35.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9624 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 24 21:57:38.640: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0524 21:57:38.570484 2354 log.go:172] (0xc00090c160) (0xc000619d60) Create stream\nI0524 21:57:38.570558 2354 log.go:172] (0xc00090c160) (0xc000619d60) Stream added, broadcasting: 1\nI0524 21:57:38.573400 2354 log.go:172] (0xc00090c160) Reply frame received for 1\nI0524 21:57:38.573455 2354 log.go:172] (0xc00090c160) (0xc000619e00) Create stream\nI0524 21:57:38.573467 2354 log.go:172] (0xc00090c160) (0xc000619e00) Stream added, broadcasting: 3\nI0524 21:57:38.574451 2354 log.go:172] (0xc00090c160) Reply frame received for 3\nI0524 21:57:38.574497 2354 log.go:172] (0xc00090c160) (0xc000838000) Create stream\nI0524 21:57:38.574511 2354 log.go:172] (0xc00090c160) (0xc000838000) Stream added, broadcasting: 5\nI0524 21:57:38.575384 2354 log.go:172] (0xc00090c160) Reply frame received for 5\nI0524 21:57:38.575441 2354 log.go:172] (0xc00090c160) (0xc000619ea0) Create stream\nI0524 21:57:38.575469 2354 log.go:172] (0xc00090c160) (0xc000619ea0) Stream added, broadcasting: 7\nI0524 21:57:38.576332 2354 log.go:172] (0xc00090c160) Reply frame received for 7\nI0524 21:57:38.576503 2354 log.go:172] (0xc000619e00) (3) Writing data frame\nI0524 21:57:38.576631 2354 log.go:172] (0xc000619e00) (3) Writing data frame\nI0524 21:57:38.577847 2354 log.go:172] (0xc00090c160) Data frame received for 5\nI0524 21:57:38.577874 2354 log.go:172] (0xc000838000) (5) Data frame handling\nI0524 21:57:38.577890 2354 log.go:172] (0xc000838000) (5) Data frame sent\nI0524 21:57:38.578438 2354 log.go:172] (0xc00090c160) Data frame received for 5\nI0524 21:57:38.578475 2354 log.go:172] (0xc000838000) (5) Data frame handling\nI0524 21:57:38.578508 2354 log.go:172] (0xc000838000) (5) Data frame sent\nI0524 21:57:38.615376 2354 log.go:172] (0xc00090c160) Data frame received for 7\nI0524 21:57:38.615430 2354 log.go:172] (0xc000619ea0) (7) Data frame handling\nI0524 21:57:38.615471 2354 
log.go:172] (0xc00090c160) Data frame received for 5\nI0524 21:57:38.615514 2354 log.go:172] (0xc000838000) (5) Data frame handling\nI0524 21:57:38.615576 2354 log.go:172] (0xc00090c160) Data frame received for 1\nI0524 21:57:38.615617 2354 log.go:172] (0xc000619d60) (1) Data frame handling\nI0524 21:57:38.615647 2354 log.go:172] (0xc000619d60) (1) Data frame sent\nI0524 21:57:38.615693 2354 log.go:172] (0xc00090c160) (0xc000619e00) Stream removed, broadcasting: 3\nI0524 21:57:38.615736 2354 log.go:172] (0xc00090c160) (0xc000619d60) Stream removed, broadcasting: 1\nI0524 21:57:38.615760 2354 log.go:172] (0xc00090c160) Go away received\nI0524 21:57:38.616374 2354 log.go:172] (0xc00090c160) (0xc000619d60) Stream removed, broadcasting: 1\nI0524 21:57:38.616414 2354 log.go:172] (0xc00090c160) (0xc000619e00) Stream removed, broadcasting: 3\nI0524 21:57:38.616440 2354 log.go:172] (0xc00090c160) (0xc000838000) Stream removed, broadcasting: 5\nI0524 21:57:38.616469 2354 log.go:172] (0xc00090c160) (0xc000619ea0) Stream removed, broadcasting: 7\n" May 24 21:57:38.640: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:57:40.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9624" for this suite. 
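The stderr above warns that `kubectl run --generator=job/v1` is deprecated. A hedged sketch of the replacement that warning points to: `kubectl create job` creates the same Job object, though it has no `--rm`/`--attach`/`--stdin` equivalents, so attaching and cleanup become separate steps. The commands are composed and printed here rather than run, since running them needs a live cluster; the namespace and names mirror the test run:

```shell
# Modern equivalent of the deprecated generator invocation above.
# Printed, not executed: `kubectl create job` does not attach or
# auto-delete, so the delete is an explicit second command.
CREATE_CMD='kubectl --namespace=kubectl-9624 create job e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 -- sh -c "cat && echo stdin closed"'
DELETE_CMD='kubectl --namespace=kubectl-9624 delete job e2e-test-rm-busybox-job'
printf '%s\n%s\n' "$CREATE_CMD" "$DELETE_CMD"
```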
• [SLOW TEST:5.564 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":165,"skipped":2904,"failed":0}
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:57:40.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-7868
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 24 21:57:40.747: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 24 21:58:03.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.186:8080/dial?request=hostname&protocol=http&host=10.244.1.185&port=8080&tries=1'] Namespace:pod-network-test-7868 PodName:host-test-container-pod 
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:58:03.031: INFO: >>> kubeConfig: /root/.kube/config I0524 21:58:03.063314 6 log.go:172] (0xc0016cc6e0) (0xc0015394a0) Create stream I0524 21:58:03.063345 6 log.go:172] (0xc0016cc6e0) (0xc0015394a0) Stream added, broadcasting: 1 I0524 21:58:03.065590 6 log.go:172] (0xc0016cc6e0) Reply frame received for 1 I0524 21:58:03.065633 6 log.go:172] (0xc0016cc6e0) (0xc000d12fa0) Create stream I0524 21:58:03.065648 6 log.go:172] (0xc0016cc6e0) (0xc000d12fa0) Stream added, broadcasting: 3 I0524 21:58:03.066681 6 log.go:172] (0xc0016cc6e0) Reply frame received for 3 I0524 21:58:03.066720 6 log.go:172] (0xc0016cc6e0) (0xc000d13180) Create stream I0524 21:58:03.066737 6 log.go:172] (0xc0016cc6e0) (0xc000d13180) Stream added, broadcasting: 5 I0524 21:58:03.067745 6 log.go:172] (0xc0016cc6e0) Reply frame received for 5 I0524 21:58:03.176083 6 log.go:172] (0xc0016cc6e0) Data frame received for 3 I0524 21:58:03.176137 6 log.go:172] (0xc000d12fa0) (3) Data frame handling I0524 21:58:03.176189 6 log.go:172] (0xc000d12fa0) (3) Data frame sent I0524 21:58:03.176542 6 log.go:172] (0xc0016cc6e0) Data frame received for 5 I0524 21:58:03.176568 6 log.go:172] (0xc000d13180) (5) Data frame handling I0524 21:58:03.176598 6 log.go:172] (0xc0016cc6e0) Data frame received for 3 I0524 21:58:03.176613 6 log.go:172] (0xc000d12fa0) (3) Data frame handling I0524 21:58:03.179051 6 log.go:172] (0xc0016cc6e0) Data frame received for 1 I0524 21:58:03.179130 6 log.go:172] (0xc0015394a0) (1) Data frame handling I0524 21:58:03.179214 6 log.go:172] (0xc0015394a0) (1) Data frame sent I0524 21:58:03.179290 6 log.go:172] (0xc0016cc6e0) (0xc0015394a0) Stream removed, broadcasting: 1 I0524 21:58:03.179339 6 log.go:172] (0xc0016cc6e0) Go away received I0524 21:58:03.179391 6 log.go:172] (0xc0016cc6e0) (0xc0015394a0) Stream removed, broadcasting: 1 I0524 21:58:03.179421 6 log.go:172] (0xc0016cc6e0) (0xc000d12fa0) 
Stream removed, broadcasting: 3 I0524 21:58:03.179457 6 log.go:172] (0xc0016cc6e0) (0xc000d13180) Stream removed, broadcasting: 5 May 24 21:58:03.179: INFO: Waiting for responses: map[] May 24 21:58:03.183: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.186:8080/dial?request=hostname&protocol=http&host=10.244.2.237&port=8080&tries=1'] Namespace:pod-network-test-7868 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 21:58:03.183: INFO: >>> kubeConfig: /root/.kube/config I0524 21:58:03.218270 6 log.go:172] (0xc0017069a0) (0xc001aa8960) Create stream I0524 21:58:03.218315 6 log.go:172] (0xc0017069a0) (0xc001aa8960) Stream added, broadcasting: 1 I0524 21:58:03.220121 6 log.go:172] (0xc0017069a0) Reply frame received for 1 I0524 21:58:03.220163 6 log.go:172] (0xc0017069a0) (0xc0011c66e0) Create stream I0524 21:58:03.220173 6 log.go:172] (0xc0017069a0) (0xc0011c66e0) Stream added, broadcasting: 3 I0524 21:58:03.220945 6 log.go:172] (0xc0017069a0) Reply frame received for 3 I0524 21:58:03.220964 6 log.go:172] (0xc0017069a0) (0xc0011c6be0) Create stream I0524 21:58:03.220972 6 log.go:172] (0xc0017069a0) (0xc0011c6be0) Stream added, broadcasting: 5 I0524 21:58:03.221821 6 log.go:172] (0xc0017069a0) Reply frame received for 5 I0524 21:58:03.290834 6 log.go:172] (0xc0017069a0) Data frame received for 3 I0524 21:58:03.290868 6 log.go:172] (0xc0011c66e0) (3) Data frame handling I0524 21:58:03.290888 6 log.go:172] (0xc0011c66e0) (3) Data frame sent I0524 21:58:03.291666 6 log.go:172] (0xc0017069a0) Data frame received for 3 I0524 21:58:03.291704 6 log.go:172] (0xc0011c66e0) (3) Data frame handling I0524 21:58:03.291967 6 log.go:172] (0xc0017069a0) Data frame received for 5 I0524 21:58:03.291985 6 log.go:172] (0xc0011c6be0) (5) Data frame handling I0524 21:58:03.293686 6 log.go:172] (0xc0017069a0) Data frame received for 1 I0524 21:58:03.293721 6 log.go:172] (0xc001aa8960) 
(1) Data frame handling I0524 21:58:03.293750 6 log.go:172] (0xc001aa8960) (1) Data frame sent I0524 21:58:03.293783 6 log.go:172] (0xc0017069a0) (0xc001aa8960) Stream removed, broadcasting: 1 I0524 21:58:03.293884 6 log.go:172] (0xc0017069a0) (0xc001aa8960) Stream removed, broadcasting: 1 I0524 21:58:03.293897 6 log.go:172] (0xc0017069a0) (0xc0011c66e0) Stream removed, broadcasting: 3 I0524 21:58:03.293967 6 log.go:172] (0xc0017069a0) Go away received I0524 21:58:03.293988 6 log.go:172] (0xc0017069a0) (0xc0011c6be0) Stream removed, broadcasting: 5 May 24 21:58:03.294: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:58:03.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7868" for this suite. • [SLOW TEST:22.651 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2907,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:58:03.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:58:19.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1767" for this suite.
• [SLOW TEST:16.313 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. 
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":167,"skipped":2910,"failed":0}
SSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:58:19.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8058
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8058
I0524 21:58:19.867095 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8058, replica count: 2
I0524 21:58:22.917569 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0524 21:58:25.917824 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 24 21:58:25.917: 
INFO: Creating new exec pod May 24 21:58:30.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8058 execpodfsv6q -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 24 21:58:31.197: INFO: stderr: "I0524 21:58:31.068136 2379 log.go:172] (0xc0006369a0) (0xc00093a000) Create stream\nI0524 21:58:31.068208 2379 log.go:172] (0xc0006369a0) (0xc00093a000) Stream added, broadcasting: 1\nI0524 21:58:31.071016 2379 log.go:172] (0xc0006369a0) Reply frame received for 1\nI0524 21:58:31.071072 2379 log.go:172] (0xc0006369a0) (0xc000603ae0) Create stream\nI0524 21:58:31.071092 2379 log.go:172] (0xc0006369a0) (0xc000603ae0) Stream added, broadcasting: 3\nI0524 21:58:31.071896 2379 log.go:172] (0xc0006369a0) Reply frame received for 3\nI0524 21:58:31.071937 2379 log.go:172] (0xc0006369a0) (0xc00093a0a0) Create stream\nI0524 21:58:31.071950 2379 log.go:172] (0xc0006369a0) (0xc00093a0a0) Stream added, broadcasting: 5\nI0524 21:58:31.072685 2379 log.go:172] (0xc0006369a0) Reply frame received for 5\nI0524 21:58:31.179250 2379 log.go:172] (0xc0006369a0) Data frame received for 5\nI0524 21:58:31.179276 2379 log.go:172] (0xc00093a0a0) (5) Data frame handling\nI0524 21:58:31.179289 2379 log.go:172] (0xc00093a0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0524 21:58:31.188852 2379 log.go:172] (0xc0006369a0) Data frame received for 5\nI0524 21:58:31.188884 2379 log.go:172] (0xc00093a0a0) (5) Data frame handling\nI0524 21:58:31.188903 2379 log.go:172] (0xc00093a0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0524 21:58:31.189291 2379 log.go:172] (0xc0006369a0) Data frame received for 3\nI0524 21:58:31.189326 2379 log.go:172] (0xc000603ae0) (3) Data frame handling\nI0524 21:58:31.189638 2379 log.go:172] (0xc0006369a0) Data frame received for 5\nI0524 21:58:31.189661 2379 log.go:172] (0xc00093a0a0) (5) Data frame handling\nI0524 21:58:31.191117 2379 log.go:172] 
(0xc0006369a0) Data frame received for 1\nI0524 21:58:31.191148 2379 log.go:172] (0xc00093a000) (1) Data frame handling\nI0524 21:58:31.191179 2379 log.go:172] (0xc00093a000) (1) Data frame sent\nI0524 21:58:31.191216 2379 log.go:172] (0xc0006369a0) (0xc00093a000) Stream removed, broadcasting: 1\nI0524 21:58:31.191243 2379 log.go:172] (0xc0006369a0) Go away received\nI0524 21:58:31.191713 2379 log.go:172] (0xc0006369a0) (0xc00093a000) Stream removed, broadcasting: 1\nI0524 21:58:31.191746 2379 log.go:172] (0xc0006369a0) (0xc000603ae0) Stream removed, broadcasting: 3\nI0524 21:58:31.191757 2379 log.go:172] (0xc0006369a0) (0xc00093a0a0) Stream removed, broadcasting: 5\n" May 24 21:58:31.197: INFO: stdout: "" May 24 21:58:31.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8058 execpodfsv6q -- /bin/sh -x -c nc -zv -t -w 2 10.100.213.112 80' May 24 21:58:31.389: INFO: stderr: "I0524 21:58:31.318957 2401 log.go:172] (0xc0001154a0) (0xc0006fbf40) Create stream\nI0524 21:58:31.319004 2401 log.go:172] (0xc0001154a0) (0xc0006fbf40) Stream added, broadcasting: 1\nI0524 21:58:31.321596 2401 log.go:172] (0xc0001154a0) Reply frame received for 1\nI0524 21:58:31.321629 2401 log.go:172] (0xc0001154a0) (0xc0004495e0) Create stream\nI0524 21:58:31.321636 2401 log.go:172] (0xc0001154a0) (0xc0004495e0) Stream added, broadcasting: 3\nI0524 21:58:31.322466 2401 log.go:172] (0xc0001154a0) Reply frame received for 3\nI0524 21:58:31.322489 2401 log.go:172] (0xc0001154a0) (0xc0008ee000) Create stream\nI0524 21:58:31.322499 2401 log.go:172] (0xc0001154a0) (0xc0008ee000) Stream added, broadcasting: 5\nI0524 21:58:31.323229 2401 log.go:172] (0xc0001154a0) Reply frame received for 5\nI0524 21:58:31.383213 2401 log.go:172] (0xc0001154a0) Data frame received for 3\nI0524 21:58:31.383241 2401 log.go:172] (0xc0004495e0) (3) Data frame handling\nI0524 21:58:31.383431 2401 log.go:172] (0xc0001154a0) Data frame received for 5\nI0524 21:58:31.383447 
2401 log.go:172] (0xc0008ee000) (5) Data frame handling\nI0524 21:58:31.383466 2401 log.go:172] (0xc0008ee000) (5) Data frame sent\nI0524 21:58:31.383477 2401 log.go:172] (0xc0001154a0) Data frame received for 5\nI0524 21:58:31.383482 2401 log.go:172] (0xc0008ee000) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.213.112 80\nConnection to 10.100.213.112 80 port [tcp/http] succeeded!\nI0524 21:58:31.384800 2401 log.go:172] (0xc0001154a0) Data frame received for 1\nI0524 21:58:31.384818 2401 log.go:172] (0xc0006fbf40) (1) Data frame handling\nI0524 21:58:31.384827 2401 log.go:172] (0xc0006fbf40) (1) Data frame sent\nI0524 21:58:31.384839 2401 log.go:172] (0xc0001154a0) (0xc0006fbf40) Stream removed, broadcasting: 1\nI0524 21:58:31.384854 2401 log.go:172] (0xc0001154a0) Go away received\nI0524 21:58:31.385385 2401 log.go:172] (0xc0001154a0) (0xc0006fbf40) Stream removed, broadcasting: 1\nI0524 21:58:31.385420 2401 log.go:172] (0xc0001154a0) (0xc0004495e0) Stream removed, broadcasting: 3\nI0524 21:58:31.385440 2401 log.go:172] (0xc0001154a0) (0xc0008ee000) Stream removed, broadcasting: 5\n" May 24 21:58:31.389: INFO: stdout: "" May 24 21:58:31.389: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:58:31.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8058" for this suite. 
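The Services test above verifies the ExternalName-to-ClusterIP switch by exec'ing `nc -zv` from a client pod, first against the service's DNS name and then against its ClusterIP. The same two-step probe, parameterized; the commands are printed rather than executed, since running them requires a pod with `nc` and a reachable service (the IP and port mirror this run):

```shell
# Build the same connectivity probes the test exec'd inside execpodfsv6q:
# -z scan without sending data, -v verbose, -t TCP, -w 2s timeout.
SVC=externalname-service
IP=10.100.213.112
PORT=80
for target in "$SVC" "$IP"; do
  printf 'nc -zv -t -w 2 %s %s\n' "$target" "$PORT"
done
```

Probing both the name and the IP distinguishes a DNS problem from an endpoint problem: the name check exercises cluster DNS plus kube-proxy, while the IP check exercises kube-proxy alone.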
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:11.854 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":168,"skipped":2915,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:58:31.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 24 21:58:31.553: INFO: >>> kubeConfig: /root/.kube/config
May 24 21:58:34.488: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 21:58:43.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7692" for this suite.
• [SLOW TEST:12.536 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":169,"skipped":2916,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 21:58:44.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
May 24 21:58:44.030: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 24 21:58:44.060: INFO: Waiting for terminating namespaces to be deleted... 
May 24 21:58:44.062: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 24 21:58:44.075: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:58:44.075: INFO: Container kube-proxy ready: true, restart count 0 May 24 21:58:44.075: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:58:44.075: INFO: Container kindnet-cni ready: true, restart count 0 May 24 21:58:44.075: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 24 21:58:44.092: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:58:44.092: INFO: Container kindnet-cni ready: true, restart count 0 May 24 21:58:44.092: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 24 21:58:44.092: INFO: Container kube-bench ready: false, restart count 0 May 24 21:58:44.092: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 24 21:58:44.092: INFO: Container kube-proxy ready: true, restart count 0 May 24 21:58:44.092: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 24 21:58:44.092: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161215bdfc7f5f2d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:58:45.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2692" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":170,"skipped":2937,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:58:45.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-31d4b7df-c7ba-447d-8e13-55c061e75baf STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:58:51.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3860" for this suite. 
• [SLOW TEST:6.134 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2942,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:58:51.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-aad011ef-d1e4-4b1c-93e3-c38d6437de10 STEP: Creating a pod to test consume configMaps May 24 21:58:51.396: INFO: Waiting up to 5m0s for pod "pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a" in namespace "configmap-6524" to be "success or failure" May 24 21:58:51.400: INFO: Pod "pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.910993ms May 24 21:58:53.418: INFO: Pod "pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021560284s May 24 21:58:55.422: INFO: Pod "pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025375042s May 24 21:58:57.443: INFO: Pod "pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046789849s STEP: Saw pod success May 24 21:58:57.443: INFO: Pod "pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a" satisfied condition "success or failure" May 24 21:58:57.446: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a container configmap-volume-test: STEP: delete the pod May 24 21:58:57.462: INFO: Waiting for pod pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a to disappear May 24 21:58:57.466: INFO: Pod pod-configmaps-f41c98b5-4fcc-41ee-aeb6-be918e968d1a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:58:57.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6524" for this suite. 
• [SLOW TEST:6.180 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2960,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:58:57.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-e0a2e102-e4d6-4d8a-86b5-b454b1221c31 STEP: Creating a pod to test consume secrets May 24 21:58:57.597: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8bec11d7-4ca7-4131-aba5-fce811a783ec" in namespace "projected-677" to be "success or failure" May 24 21:58:57.635: INFO: Pod "pod-projected-secrets-8bec11d7-4ca7-4131-aba5-fce811a783ec": Phase="Pending", Reason="", readiness=false. Elapsed: 37.262902ms May 24 21:58:59.639: INFO: Pod "pod-projected-secrets-8bec11d7-4ca7-4131-aba5-fce811a783ec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.041659692s May 24 21:59:01.642: INFO: Pod "pod-projected-secrets-8bec11d7-4ca7-4131-aba5-fce811a783ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044436331s STEP: Saw pod success May 24 21:59:01.642: INFO: Pod "pod-projected-secrets-8bec11d7-4ca7-4131-aba5-fce811a783ec" satisfied condition "success or failure" May 24 21:59:01.647: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8bec11d7-4ca7-4131-aba5-fce811a783ec container projected-secret-volume-test: STEP: delete the pod May 24 21:59:01.682: INFO: Waiting for pod pod-projected-secrets-8bec11d7-4ca7-4131-aba5-fce811a783ec to disappear May 24 21:59:01.688: INFO: Pod pod-projected-secrets-8bec11d7-4ca7-4131-aba5-fce811a783ec no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:01.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-677" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2960,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:01.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:17.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6482" for this suite. • [SLOW TEST:16.107 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":174,"skipped":2967,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:17.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:59:17.864: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-195cb217-a6e7-4673-bdd6-9dbaec6d0350" in namespace "security-context-test-9403" to be "success or failure" May 24 21:59:17.893: INFO: Pod "busybox-readonly-false-195cb217-a6e7-4673-bdd6-9dbaec6d0350": Phase="Pending", Reason="", readiness=false. Elapsed: 28.612631ms May 24 21:59:19.897: INFO: Pod "busybox-readonly-false-195cb217-a6e7-4673-bdd6-9dbaec6d0350": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032834362s May 24 21:59:21.945: INFO: Pod "busybox-readonly-false-195cb217-a6e7-4673-bdd6-9dbaec6d0350": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080900513s May 24 21:59:21.945: INFO: Pod "busybox-readonly-false-195cb217-a6e7-4673-bdd6-9dbaec6d0350" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:21.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9403" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":3033,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:21.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 24 21:59:22.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1729' May 24 21:59:22.391: INFO: stderr: "" May 24 21:59:22.391: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 24 21:59:23.396: INFO: Selector matched 1 pods for map[app:agnhost] May 24 21:59:23.396: INFO: Found 0 / 1 May 24 21:59:24.490: INFO: Selector matched 1 pods for map[app:agnhost] May 24 21:59:24.490: INFO: Found 0 / 1 May 24 21:59:25.401: INFO: Selector matched 1 pods for map[app:agnhost] May 24 21:59:25.401: INFO: Found 0 / 1 May 24 21:59:26.396: INFO: Selector matched 1 pods for map[app:agnhost] May 24 21:59:26.396: INFO: Found 1 / 1 May 24 21:59:26.396: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 24 21:59:26.400: INFO: Selector matched 1 pods for map[app:agnhost] May 24 21:59:26.400: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 24 21:59:26.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-kb44c --namespace=kubectl-1729 -p {"metadata":{"annotations":{"x":"y"}}}' May 24 21:59:26.507: INFO: stderr: "" May 24 21:59:26.507: INFO: stdout: "pod/agnhost-master-kb44c patched\n" STEP: checking annotations May 24 21:59:26.527: INFO: Selector matched 1 pods for map[app:agnhost] May 24 21:59:26.527: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:26.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1729" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":176,"skipped":3055,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:26.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:59:26.628: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:27.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2902" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":177,"skipped":3055,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:27.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 21:59:27.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9621' May 24 21:59:27.513: INFO: stderr: "" May 24 21:59:27.513: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 24 21:59:27.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods 
e2e-test-httpd-pod --namespace=kubectl-9621' May 24 21:59:39.241: INFO: stderr: "" May 24 21:59:39.241: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:39.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9621" for this suite. • [SLOW TEST:11.970 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":178,"skipped":3058,"failed":0} S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:39.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 
21:59:39.340: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 24 21:59:39.360: INFO: Pod name sample-pod: Found 0 pods out of 1 May 24 21:59:44.363: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 21:59:44.364: INFO: Creating deployment "test-rolling-update-deployment" May 24 21:59:44.367: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 24 21:59:44.406: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 24 21:59:46.414: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 24 21:59:46.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954384, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954384, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954384, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954384, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 21:59:48.442: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 24 21:59:48.459: INFO: 
Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2097 /apis/apps/v1/namespaces/deployment-2097/deployments/test-rolling-update-deployment 99700a4a-f3a3-4723-8100-27d3840d77cf 18864048 1 2020-05-24 21:59:44 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00341ecf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-24 21:59:44 +0000 UTC,LastTransitionTime:2020-05-24 21:59:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has 
successfully progressed.,LastUpdateTime:2020-05-24 21:59:47 +0000 UTC,LastTransitionTime:2020-05-24 21:59:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 24 21:59:48.463: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2097 /apis/apps/v1/namespaces/deployment-2097/replicasets/test-rolling-update-deployment-67cf4f6444 3df00930-46d1-4497-94a3-008056c09726 18864037 1 2020-05-24 21:59:44 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 99700a4a-f3a3-4723-8100-27d3840d77cf 0xc00341f607 0xc00341f608}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00341f698 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 24 
21:59:48.463: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 24 21:59:48.463: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2097 /apis/apps/v1/namespaces/deployment-2097/replicasets/test-rolling-update-controller 1b3155f7-a1a9-41a8-a61e-527a26b7611a 18864046 2 2020-05-24 21:59:39 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 99700a4a-f3a3-4723-8100-27d3840d77cf 0xc00341f4f7 0xc00341f4f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00341f578 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 21:59:48.466: INFO: Pod "test-rolling-update-deployment-67cf4f6444-zwsjd" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-zwsjd test-rolling-update-deployment-67cf4f6444- deployment-2097 /api/v1/namespaces/deployment-2097/pods/test-rolling-update-deployment-67cf4f6444-zwsjd 488ea00a-f529-43ce-b70c-fcbfbb895f65 18864036 0 2020-05-24 21:59:44 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet 
test-rolling-update-deployment-67cf4f6444 3df00930-46d1-4497-94a3-008056c09726 0xc00341fcd7 0xc00341fcd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vvfn2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vvfn2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vvfn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSG
roup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:59:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:59:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:59:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 21:59:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.244,StartTime:2020-05-24 21:59:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 21:59:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://5ee85139306f9b3a188fa241ac9f2de8b092d18042f4bca471867a1962300a78,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:48.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2097" for this suite. • [SLOW TEST:9.206 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":179,"skipped":3059,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:48.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account 
to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 24 21:59:56.544: INFO: 0 pods remaining May 24 21:59:56.544: INFO: 0 pods has nil DeletionTimestamp May 24 21:59:56.544: INFO: STEP: Gathering metrics W0524 21:59:57.424931 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 21:59:57.425: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:57.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9209" for this suite. 
• [SLOW TEST:8.957 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":180,"skipped":3070,"failed":0} SSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:57.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 21:59:58.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5962" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":181,"skipped":3076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 21:59:58.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 21:59:59.923: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 24 21:59:59.955: INFO: Number of nodes with available pods: 0 May 24 21:59:59.955: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 24 22:00:00.299: INFO: Number of nodes with available pods: 0 May 24 22:00:00.299: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:01.303: INFO: Number of nodes with available pods: 0 May 24 22:00:01.303: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:02.303: INFO: Number of nodes with available pods: 0 May 24 22:00:02.303: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:03.322: INFO: Number of nodes with available pods: 1 May 24 22:00:03.322: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 24 22:00:03.363: INFO: Number of nodes with available pods: 1 May 24 22:00:03.363: INFO: Number of running nodes: 0, number of available pods: 1 May 24 22:00:04.374: INFO: Number of nodes with available pods: 0 May 24 22:00:04.374: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 24 22:00:04.392: INFO: Number of nodes with available pods: 0 May 24 22:00:04.392: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:05.396: INFO: Number of nodes with available pods: 0 May 24 22:00:05.396: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:06.397: INFO: Number of nodes with available pods: 0 May 24 22:00:06.397: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:07.397: INFO: Number of nodes with available pods: 0 May 24 22:00:07.397: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:08.396: INFO: Number of nodes with available pods: 0 May 24 22:00:08.396: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:09.397: INFO: Number of nodes with available pods: 0 May 24 22:00:09.397: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:10.397: INFO: Number of nodes with 
available pods: 0 May 24 22:00:10.397: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:11.397: INFO: Number of nodes with available pods: 0 May 24 22:00:11.397: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:12.413: INFO: Number of nodes with available pods: 1 May 24 22:00:12.413: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7105, will wait for the garbage collector to delete the pods May 24 22:00:12.478: INFO: Deleting DaemonSet.extensions daemon-set took: 6.569027ms May 24 22:00:12.779: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.264703ms May 24 22:00:19.583: INFO: Number of nodes with available pods: 0 May 24 22:00:19.583: INFO: Number of running nodes: 0, number of available pods: 0 May 24 22:00:19.586: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7105/daemonsets","resourceVersion":"18864364"},"items":null} May 24 22:00:19.588: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7105/pods","resourceVersion":"18864364"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:00:19.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7105" for this suite. 
• [SLOW TEST:20.716 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":182,"skipped":3168,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:00:19.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 24 22:00:19.715: INFO: Waiting up to 5m0s for pod "pod-7205f40d-c2ea-442c-ab3b-7c8dcfa5a769" in namespace "emptydir-7098" to be "success or failure" May 24 22:00:19.739: INFO: Pod "pod-7205f40d-c2ea-442c-ab3b-7c8dcfa5a769": Phase="Pending", Reason="", readiness=false. Elapsed: 23.911829ms May 24 22:00:21.743: INFO: Pod "pod-7205f40d-c2ea-442c-ab3b-7c8dcfa5a769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028151968s May 24 22:00:23.747: INFO: Pod "pod-7205f40d-c2ea-442c-ab3b-7c8dcfa5a769": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032227609s STEP: Saw pod success May 24 22:00:23.747: INFO: Pod "pod-7205f40d-c2ea-442c-ab3b-7c8dcfa5a769" satisfied condition "success or failure" May 24 22:00:23.750: INFO: Trying to get logs from node jerma-worker2 pod pod-7205f40d-c2ea-442c-ab3b-7c8dcfa5a769 container test-container: STEP: delete the pod May 24 22:00:23.788: INFO: Waiting for pod pod-7205f40d-c2ea-442c-ab3b-7c8dcfa5a769 to disappear May 24 22:00:23.826: INFO: Pod pod-7205f40d-c2ea-442c-ab3b-7c8dcfa5a769 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:00:23.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7098" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3174,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:00:23.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:00:23.922: INFO: Create a RollingUpdate DaemonSet May 24 22:00:23.925: INFO: Check that daemon pods launch on every 
node of the cluster May 24 22:00:23.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:23.955: INFO: Number of nodes with available pods: 0 May 24 22:00:23.955: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:24.978: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:24.981: INFO: Number of nodes with available pods: 0 May 24 22:00:24.981: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:25.959: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:25.962: INFO: Number of nodes with available pods: 0 May 24 22:00:25.962: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:26.968: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:26.973: INFO: Number of nodes with available pods: 0 May 24 22:00:26.973: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:27.960: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:27.963: INFO: Number of nodes with available pods: 0 May 24 22:00:27.963: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:29.067: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:29.071: INFO: Number of nodes with available pods: 2 May 24 
22:00:29.071: INFO: Number of running nodes: 2, number of available pods: 2 May 24 22:00:29.071: INFO: Update the DaemonSet to trigger a rollout May 24 22:00:29.076: INFO: Updating DaemonSet daemon-set May 24 22:00:40.157: INFO: Roll back the DaemonSet before rollout is complete May 24 22:00:40.162: INFO: Updating DaemonSet daemon-set May 24 22:00:40.162: INFO: Make sure DaemonSet rollback is complete May 24 22:00:40.228: INFO: Wrong image for pod: daemon-set-lvcqz. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 24 22:00:40.228: INFO: Pod daemon-set-lvcqz is not available May 24 22:00:40.232: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:41.237: INFO: Wrong image for pod: daemon-set-lvcqz. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 24 22:00:41.237: INFO: Pod daemon-set-lvcqz is not available May 24 22:00:41.242: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:42.252: INFO: Pod daemon-set-ctsgl is not available May 24 22:00:42.256: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4061, will wait for the garbage collector to delete the pods May 24 22:00:42.341: INFO: Deleting DaemonSet.extensions daemon-set took: 7.481275ms May 24 22:00:42.641: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.428376ms May 24 22:00:46.245: INFO: Number of nodes with 
available pods: 0 May 24 22:00:46.245: INFO: Number of running nodes: 0, number of available pods: 0 May 24 22:00:46.248: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4061/daemonsets","resourceVersion":"18864561"},"items":null} May 24 22:00:46.251: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4061/pods","resourceVersion":"18864561"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:00:46.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4061" for this suite. • [SLOW TEST:22.433 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":184,"skipped":3186,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:00:46.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 
[It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 24 22:00:46.382: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:46.408: INFO: Number of nodes with available pods: 0 May 24 22:00:46.408: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:47.419: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:47.423: INFO: Number of nodes with available pods: 0 May 24 22:00:47.423: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:48.413: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:48.417: INFO: Number of nodes with available pods: 0 May 24 22:00:48.417: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:49.419: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:49.422: INFO: Number of nodes with available pods: 0 May 24 22:00:49.422: INFO: Node jerma-worker is running more than one daemon pod May 24 22:00:50.414: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:50.418: INFO: Number of nodes with available pods: 1 May 24 22:00:50.418: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:51.418: 
INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:51.423: INFO: Number of nodes with available pods: 2 May 24 22:00:51.423: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 24 22:00:51.534: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:51.564: INFO: Number of nodes with available pods: 1 May 24 22:00:51.564: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:52.587: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:52.591: INFO: Number of nodes with available pods: 1 May 24 22:00:52.591: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:53.567: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:53.678: INFO: Number of nodes with available pods: 1 May 24 22:00:53.678: INFO: Node jerma-worker2 is running more than one daemon pod May 24 22:00:54.569: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 22:00:54.573: INFO: Number of nodes with available pods: 2 May 24 22:00:54.573: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3336, will wait for the garbage collector to delete the pods May 24 22:00:54.655: INFO: Deleting DaemonSet.extensions daemon-set took: 6.116038ms May 24 22:00:54.756: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.241514ms May 24 22:01:09.575: INFO: Number of nodes with available pods: 0 May 24 22:01:09.575: INFO: Number of running nodes: 0, number of available pods: 0 May 24 22:01:09.578: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3336/daemonsets","resourceVersion":"18864713"},"items":null} May 24 22:01:09.580: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3336/pods","resourceVersion":"18864713"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:01:09.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3336" for this suite. 
• [SLOW TEST:23.329 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":185,"skipped":3187,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:01:09.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0524 22:01:50.686030 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 24 22:01:50.686: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:01:50.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9724" for this suite. 
• [SLOW TEST:41.095 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":186,"skipped":3204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:01:50.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 24 22:01:51.616: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 24 22:01:53.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 22:01:56.791: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:01:56.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:01:59.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7740" for this suite. 
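For context, the conversion webhook exercised above is wired up through the CRD's `spec.conversion` stanza. A minimal sketch of such a CRD (the group, kind, service name, and path here are illustrative, not taken from this run):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: my-crs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: my-crs
    singular: my-cr
    kind: MyCR
  versions:
    - name: v1
      served: true
      storage: true          # storage version; v2 objects are converted on read/write
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v2
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: default
          name: crd-conversion-webhook-service   # hypothetical; the test deploys its own service
          path: /crdconvert
        # caBundle: <base64 PEM of the CA that signed the webhook serving cert>
```

Listing the CRs "in v1" and "in v2", as the test does, is what forces the apiserver to call this webhook for the non-homogeneous list.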
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:9.406 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":187,"skipped":3249,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:02:00.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-bmp4 STEP: Creating a pod to test atomic-volume-subpath May 24 22:02:01.390: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bmp4" in namespace "subpath-6431" to be "success 
or failure" May 24 22:02:01.414: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.897515ms May 24 22:02:03.417: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027847943s May 24 22:02:05.422: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 4.032614687s May 24 22:02:07.427: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 6.036928726s May 24 22:02:09.430: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 8.040437784s May 24 22:02:11.434: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 10.044835771s May 24 22:02:13.439: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 12.049387886s May 24 22:02:15.444: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 14.054013448s May 24 22:02:17.447: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 16.057649041s May 24 22:02:19.452: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 18.06235577s May 24 22:02:21.457: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 20.066968271s May 24 22:02:23.460: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Running", Reason="", readiness=true. Elapsed: 22.070336158s May 24 22:02:25.465: INFO: Pod "pod-subpath-test-downwardapi-bmp4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.075326851s STEP: Saw pod success May 24 22:02:25.465: INFO: Pod "pod-subpath-test-downwardapi-bmp4" satisfied condition "success or failure" May 24 22:02:25.468: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-bmp4 container test-container-subpath-downwardapi-bmp4: STEP: delete the pod May 24 22:02:25.503: INFO: Waiting for pod pod-subpath-test-downwardapi-bmp4 to disappear May 24 22:02:25.507: INFO: Pod pod-subpath-test-downwardapi-bmp4 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-bmp4 May 24 22:02:25.507: INFO: Deleting pod "pod-subpath-test-downwardapi-bmp4" in namespace "subpath-6431" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:02:25.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6431" for this suite. • [SLOW TEST:25.415 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":188,"skipped":3259,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes 
client May 24 22:02:25.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 22:02:25.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4828' May 24 22:02:25.680: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 24 22:02:25.681: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 24 22:02:25.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4828' May 24 22:02:25.818: INFO: stderr: "" May 24 22:02:25.818: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:02:25.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4828" for this suite. 
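The stderr above notes that `kubectl run --generator=job/v1` is deprecated in favor of `kubectl create`. The equivalent declarative manifest for the Job this test creates would look roughly like this (a sketch; only the name, image, and restart policy are taken from the log):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-httpd-job
spec:
  template:
    spec:
      containers:
        - name: e2e-test-httpd-job
          image: docker.io/library/httpd:2.4.38-alpine
      # --restart=OnFailure is what routed `kubectl run` to the Job generator
      restartPolicy: OnFailure
```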
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":189,"skipped":3268,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:02:25.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 24 22:02:25.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1862' May 24 22:02:26.229: INFO: stderr: "" May 24 22:02:26.229: INFO: stdout: "pod/pause created\n" May 24 22:02:26.229: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 24 22:02:26.229: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1862" to be "running and ready" May 24 22:02:26.232: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617921ms May 24 22:02:28.236: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006353559s May 24 22:02:30.239: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.010176046s May 24 22:02:30.239: INFO: Pod "pause" satisfied condition "running and ready" May 24 22:02:30.240: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 24 22:02:30.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1862' May 24 22:02:30.349: INFO: stderr: "" May 24 22:02:30.349: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 24 22:02:30.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1862' May 24 22:02:30.452: INFO: stderr: "" May 24 22:02:30.452: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 24 22:02:30.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1862' May 24 22:02:30.555: INFO: stderr: "" May 24 22:02:30.555: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 24 22:02:30.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1862' May 24 22:02:30.649: INFO: stderr: "" May 24 22:02:30.649: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 24 22:02:30.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1862' May 24 22:02:31.024: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has 
been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 22:02:31.024: INFO: stdout: "pod \"pause\" force deleted\n" May 24 22:02:31.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1862' May 24 22:02:31.392: INFO: stderr: "No resources found in kubectl-1862 namespace.\n" May 24 22:02:31.392: INFO: stdout: "" May 24 22:02:31.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1862 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 22:02:31.478: INFO: stderr: "" May 24 22:02:31.478: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:02:31.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1862" for this suite. 
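As a sketch of what the label test leaves behind: after `kubectl label pods pause testing-label=testing-label-value`, the pod's metadata carries the new key, and `kubectl label pods pause testing-label-` (trailing dash) removes it again. The pod manifest piped via `create -f -` is not shown in the log; the image below is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause                          # matches the cleanup query `kubectl get ... -l name=pause`
    testing-label: testing-label-value   # added imperatively; removed with `testing-label-`
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1        # illustrative image, not from this run
```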
• [SLOW TEST:5.678 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":190,"skipped":3277,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:02:31.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-ab608be8-2e4d-4edd-9531-bc3f2681811c in namespace container-probe-5054 May 24 22:02:35.916: INFO: Started pod liveness-ab608be8-2e4d-4edd-9531-bc3f2681811c in namespace container-probe-5054 STEP: checking the pod's current state and verifying that restartCount is present May 24 22:02:35.919: INFO: Initial restart count of pod liveness-ab608be8-2e4d-4edd-9531-bc3f2681811c is 0 May 24 22:02:55.976: INFO: Restart 
count of pod container-probe-5054/liveness-ab608be8-2e4d-4edd-9531-bc3f2681811c is now 1 (20.056633418s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:02:55.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5054" for this suite. • [SLOW TEST:24.495 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3280,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:02:56.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
busybox-2e93b80a-2915-4612-80bd-2b00ed29ed3a in namespace container-probe-4060 May 24 22:03:00.094: INFO: Started pod busybox-2e93b80a-2915-4612-80bd-2b00ed29ed3a in namespace container-probe-4060 STEP: checking the pod's current state and verifying that restartCount is present May 24 22:03:00.096: INFO: Initial restart count of pod busybox-2e93b80a-2915-4612-80bd-2b00ed29ed3a is 0 May 24 22:03:56.234: INFO: Restart count of pod container-probe-4060/busybox-2e93b80a-2915-4612-80bd-2b00ed29ed3a is now 1 (56.137786816s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:03:56.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4060" for this suite. • [SLOW TEST:60.302 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:03:56.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default 
service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 24 22:04:00.952: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4208 pod-service-account-6440ecd6-617d-4c6b-b8ca-971e5b49ae93 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 24 22:04:04.050: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4208 pod-service-account-6440ecd6-617d-4c6b-b8ca-971e5b49ae93 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 24 22:04:04.239: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4208 pod-service-account-6440ecd6-617d-4c6b-b8ca-971e5b49ae93 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:04:04.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4208" for this suite. 
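The three `kubectl exec ... cat` invocations above read the auto-mounted service account files. A minimal pod sketch showing where the kubelet projects them (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account
spec:
  serviceAccountName: default
  automountServiceAccountToken: true   # the default; shown for clarity
  containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
# The kubelet mounts the credentials at:
#   /var/run/secrets/kubernetes.io/serviceaccount/token
#   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
#   /var/run/secrets/kubernetes.io/serviceaccount/namespace
```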
• [SLOW TEST:8.154 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":193,"skipped":3336,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:04:04.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:04:04.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 24 22:04:04.680: INFO: stderr: "" May 24 22:04:04.680: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", 
GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:04:04.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8815" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":194,"skipped":3342,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:04:04.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-3f1fd93a-89bd-41dd-90c9-e69e99c0436b May 24 22:04:04.784: INFO: Pod name my-hostname-basic-3f1fd93a-89bd-41dd-90c9-e69e99c0436b: Found 0 pods out of 1 May 24 22:04:09.812: INFO: Pod name my-hostname-basic-3f1fd93a-89bd-41dd-90c9-e69e99c0436b: Found 1 pods out of 1 May 24 22:04:09.812: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3f1fd93a-89bd-41dd-90c9-e69e99c0436b" are running May 24 22:04:09.815: INFO: Pod "my-hostname-basic-3f1fd93a-89bd-41dd-90c9-e69e99c0436b-gpqqz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 
22:04:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 22:04:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 22:04:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 22:04:04 +0000 UTC Reason: Message:}]) May 24 22:04:09.815: INFO: Trying to dial the pod May 24 22:04:14.828: INFO: Controller my-hostname-basic-3f1fd93a-89bd-41dd-90c9-e69e99c0436b: Got expected result from replica 1 [my-hostname-basic-3f1fd93a-89bd-41dd-90c9-e69e99c0436b-gpqqz]: "my-hostname-basic-3f1fd93a-89bd-41dd-90c9-e69e99c0436b-gpqqz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:04:14.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8778" for this suite. 
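A ReplicationController like the one above, whose single replica serves its own hostname back to the test's dial, can be sketched as (the image and args are assumptions about how the e2e hostname server is run):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
        - name: my-hostname-basic
          # agnhost's serve-hostname mode replies with the pod name over HTTP,
          # which is the "expected result from replica 1" the test checks
          image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
          args: ["serve-hostname"]
```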
• [SLOW TEST:10.146 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":195,"skipped":3344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:04:14.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 22:04:14.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49e39657-011a-495c-a33b-b991326d8b30" in namespace "projected-5450" to be "success or failure" May 24 22:04:14.963: INFO: Pod "downwardapi-volume-49e39657-011a-495c-a33b-b991326d8b30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.855757ms May 24 22:04:16.986: INFO: Pod "downwardapi-volume-49e39657-011a-495c-a33b-b991326d8b30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053763319s May 24 22:04:18.990: INFO: Pod "downwardapi-volume-49e39657-011a-495c-a33b-b991326d8b30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05820855s STEP: Saw pod success May 24 22:04:18.990: INFO: Pod "downwardapi-volume-49e39657-011a-495c-a33b-b991326d8b30" satisfied condition "success or failure" May 24 22:04:18.993: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-49e39657-011a-495c-a33b-b991326d8b30 container client-container: STEP: delete the pod May 24 22:04:19.136: INFO: Waiting for pod downwardapi-volume-49e39657-011a-495c-a33b-b991326d8b30 to disappear May 24 22:04:19.187: INFO: Pod downwardapi-volume-49e39657-011a-495c-a33b-b991326d8b30 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:04:19.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5450" for this suite. 
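The DefaultMode behavior verified above is set on the projected volume itself. A self-contained sketch (pod name, mount path, and item names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        defaultMode: 0644        # the mode under test; applies to files without an explicit per-item mode
        sources:
          - downwardAPI:
              items:
                - path: podname
                  fieldRef:
                    fieldPath: metadata.name
```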
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:04:19.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:04:19.380: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-558f928e-7e16-4d9a-8ff9-64090a09230c" in namespace "security-context-test-4545" to be "success or failure" May 24 22:04:19.383: INFO: Pod "busybox-privileged-false-558f928e-7e16-4d9a-8ff9-64090a09230c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241721ms May 24 22:04:21.422: INFO: Pod "busybox-privileged-false-558f928e-7e16-4d9a-8ff9-64090a09230c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042167975s May 24 22:04:23.432: INFO: Pod "busybox-privileged-false-558f928e-7e16-4d9a-8ff9-64090a09230c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051502733s May 24 22:04:23.432: INFO: Pod "busybox-privileged-false-558f928e-7e16-4d9a-8ff9-64090a09230c" satisfied condition "success or failure" May 24 22:04:23.442: INFO: Got logs for pod "busybox-privileged-false-558f928e-7e16-4d9a-8ff9-64090a09230c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:04:23.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4545" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3412,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:04:23.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 24 22:04:23.531: INFO: Waiting up to 5m0s for pod "pod-e451255f-69ce-49e0-9c9e-626f9640f567" in namespace "emptydir-2434" to be "success or failure" May 24 22:04:23.534: INFO: Pod "pod-e451255f-69ce-49e0-9c9e-626f9640f567": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.180145ms May 24 22:04:25.590: INFO: Pod "pod-e451255f-69ce-49e0-9c9e-626f9640f567": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059407515s May 24 22:04:27.594: INFO: Pod "pod-e451255f-69ce-49e0-9c9e-626f9640f567": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063271176s STEP: Saw pod success May 24 22:04:27.594: INFO: Pod "pod-e451255f-69ce-49e0-9c9e-626f9640f567" satisfied condition "success or failure" May 24 22:04:27.596: INFO: Trying to get logs from node jerma-worker2 pod pod-e451255f-69ce-49e0-9c9e-626f9640f567 container test-container: STEP: delete the pod May 24 22:04:27.631: INFO: Waiting for pod pod-e451255f-69ce-49e0-9c9e-626f9640f567 to disappear May 24 22:04:27.686: INFO: Pod pod-e451255f-69ce-49e0-9c9e-626f9640f567 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:04:27.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2434" for this suite. 
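The EmptyDir test above creates a pod that writes a 0644-mode file into a tmpfs-backed emptyDir volume and verifies its contents and permissions. A minimal manifest approximating what the test creates might look like the following sketch; the pod name, image, and command are illustrative (the container name `test-container` is taken from the log, and the real test uses the e2e mounttest image rather than busybox):

```yaml
# Illustrative sketch only -- not the exact pod the e2e framework builds.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container         # container name as seen in the log
    image: busybox               # assumption; the test uses its own mounttest image
    command:
    - sh
    - -c
    - "echo hello > /mnt/volume/file && chmod 0644 /mnt/volume/file && ls -l /mnt/volume"
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory             # Memory medium makes the emptyDir tmpfs-backed
```

The `medium: Memory` field is what distinguishes this case from the default disk-backed emptyDir variants exercised elsewhere in the suite.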
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3420,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:04:27.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6542 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6542 STEP: creating replication controller externalsvc in namespace services-6542 I0524 22:04:27.930671 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6542, replica count: 2 I0524 22:04:30.981303 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 22:04:33.981676 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 24 
22:04:34.042: INFO: Creating new exec pod May 24 22:04:38.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6542 execpod2vrkf -- /bin/sh -x -c nslookup nodeport-service' May 24 22:04:38.274: INFO: stderr: "I0524 22:04:38.196603 2802 log.go:172] (0xc0009e49a0) (0xc000621ae0) Create stream\nI0524 22:04:38.196692 2802 log.go:172] (0xc0009e49a0) (0xc000621ae0) Stream added, broadcasting: 1\nI0524 22:04:38.199441 2802 log.go:172] (0xc0009e49a0) Reply frame received for 1\nI0524 22:04:38.199512 2802 log.go:172] (0xc0009e49a0) (0xc000621d60) Create stream\nI0524 22:04:38.199541 2802 log.go:172] (0xc0009e49a0) (0xc000621d60) Stream added, broadcasting: 3\nI0524 22:04:38.200663 2802 log.go:172] (0xc0009e49a0) Reply frame received for 3\nI0524 22:04:38.200721 2802 log.go:172] (0xc0009e49a0) (0xc00043e000) Create stream\nI0524 22:04:38.200738 2802 log.go:172] (0xc0009e49a0) (0xc00043e000) Stream added, broadcasting: 5\nI0524 22:04:38.201980 2802 log.go:172] (0xc0009e49a0) Reply frame received for 5\nI0524 22:04:38.254082 2802 log.go:172] (0xc0009e49a0) Data frame received for 5\nI0524 22:04:38.254116 2802 log.go:172] (0xc00043e000) (5) Data frame handling\nI0524 22:04:38.254137 2802 log.go:172] (0xc00043e000) (5) Data frame sent\n+ nslookup nodeport-service\nI0524 22:04:38.264369 2802 log.go:172] (0xc0009e49a0) Data frame received for 3\nI0524 22:04:38.264396 2802 log.go:172] (0xc000621d60) (3) Data frame handling\nI0524 22:04:38.264415 2802 log.go:172] (0xc000621d60) (3) Data frame sent\nI0524 22:04:38.266040 2802 log.go:172] (0xc0009e49a0) Data frame received for 3\nI0524 22:04:38.266086 2802 log.go:172] (0xc000621d60) (3) Data frame handling\nI0524 22:04:38.266120 2802 log.go:172] (0xc000621d60) (3) Data frame sent\nI0524 22:04:38.266247 2802 log.go:172] (0xc0009e49a0) Data frame received for 5\nI0524 22:04:38.266290 2802 log.go:172] (0xc00043e000) (5) Data frame handling\nI0524 22:04:38.266315 2802 log.go:172] 
(0xc0009e49a0) Data frame received for 3\nI0524 22:04:38.266331 2802 log.go:172] (0xc000621d60) (3) Data frame handling\nI0524 22:04:38.268070 2802 log.go:172] (0xc0009e49a0) Data frame received for 1\nI0524 22:04:38.268104 2802 log.go:172] (0xc000621ae0) (1) Data frame handling\nI0524 22:04:38.268147 2802 log.go:172] (0xc000621ae0) (1) Data frame sent\nI0524 22:04:38.268178 2802 log.go:172] (0xc0009e49a0) (0xc000621ae0) Stream removed, broadcasting: 1\nI0524 22:04:38.268377 2802 log.go:172] (0xc0009e49a0) Go away received\nI0524 22:04:38.268694 2802 log.go:172] (0xc0009e49a0) (0xc000621ae0) Stream removed, broadcasting: 1\nI0524 22:04:38.268719 2802 log.go:172] (0xc0009e49a0) (0xc000621d60) Stream removed, broadcasting: 3\nI0524 22:04:38.268732 2802 log.go:172] (0xc0009e49a0) (0xc00043e000) Stream removed, broadcasting: 5\n" May 24 22:04:38.274: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6542.svc.cluster.local\tcanonical name = externalsvc.services-6542.svc.cluster.local.\nName:\texternalsvc.services-6542.svc.cluster.local\nAddress: 10.101.198.68\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6542, will wait for the garbage collector to delete the pods May 24 22:04:38.335: INFO: Deleting ReplicationController externalsvc took: 7.491351ms May 24 22:04:38.735: INFO: Terminating ReplicationController externalsvc pods took: 400.243249ms May 24 22:04:49.703: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:04:49.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6542" for this suite. 
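The Services test above flips a Service from `type: NodePort` to `type: ExternalName` and confirms, via `nslookup` from an exec pod, that the name now resolves as a CNAME to the backing service. A sketch of the two states of the Service follows; the names `nodeport-service`, `externalsvc`, and the namespace `services-6542` are taken from the log, while the port is illustrative:

```yaml
# Before: a plain NodePort Service (port number is an assumption).
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-6542
spec:
  type: NodePort
  ports:
  - port: 80
---
# After the type change: the same Service becomes a DNS alias, matching the
# CNAME seen in the nslookup output above.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-6542
spec:
  type: ExternalName
  externalName: externalsvc.services-6542.svc.cluster.local
```

An ExternalName Service has no cluster IP or endpoints; kube-dns/CoreDNS simply returns the CNAME, which is exactly what the captured `nslookup` stdout shows.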
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:22.025 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":199,"skipped":3447,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:04:49.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 24 22:04:49.824: INFO: >>> kubeConfig: /root/.kube/config May 24 22:04:52.745: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:05:03.189: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8190" for this suite. • [SLOW TEST:13.451 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":200,"skipped":3450,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:05:03.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:05:14.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3108" for this suite. • [SLOW TEST:11.189 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":201,"skipped":3454,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:05:14.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-b956b3e5-cc6b-41c6-9af7-e598e3801cc0 STEP: Creating a pod to test consume configMaps May 24 22:05:14.448: INFO: Waiting up to 5m0s for pod "pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935" in namespace "configmap-3366" to be "success or failure" May 24 22:05:14.451: INFO: Pod "pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935": Phase="Pending", Reason="", readiness=false. Elapsed: 3.407083ms May 24 22:05:16.455: INFO: Pod "pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112934s May 24 22:05:18.459: INFO: Pod "pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935": Phase="Running", Reason="", readiness=true. Elapsed: 4.010726715s May 24 22:05:20.462: INFO: Pod "pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014474203s STEP: Saw pod success May 24 22:05:20.462: INFO: Pod "pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935" satisfied condition "success or failure" May 24 22:05:20.488: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935 container configmap-volume-test: STEP: delete the pod May 24 22:05:20.526: INFO: Waiting for pod pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935 to disappear May 24 22:05:20.547: INFO: Pod pod-configmaps-4421fc01-4a6c-455e-97c5-a8210dff1935 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:05:20.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3366" for this suite. • [SLOW TEST:6.171 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3471,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:05:20.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 22:05:21.418: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 22:05:23.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954721, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954721, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954721, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954721, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 22:05:25.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954721, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954721, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954721, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954721, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 22:05:28.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:05:28.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1225" for this suite. STEP: Destroying namespace "webhook-1225-markers" for this suite. 
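The discovery-document test above walks `/apis`, then `/apis/admissionregistration.k8s.io`, then `/apis/admissionregistration.k8s.io/v1`, checking that the webhook configuration resources are published. An abbreviated sketch of the shape of the final discovery response (field subset only; `verbs` and `singularName` omitted for brevity):

```yaml
# Abbreviated, illustrative shape of GET /apis/admissionregistration.k8s.io/v1
kind: APIResourceList
apiVersion: v1
groupVersion: admissionregistration.k8s.io/v1
resources:
- name: mutatingwebhookconfigurations
  kind: MutatingWebhookConfiguration
  namespaced: false
- name: validatingwebhookconfigurations
  kind: ValidatingWebhookConfiguration
  namespaced: false
```

Both resources are cluster-scoped (`namespaced: false`), which is what the test's final step verifies.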
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.018 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":203,"skipped":3488,"failed":0} [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:05:28.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 22:05:28.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 
--namespace=kubectl-6559' May 24 22:05:28.774: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 24 22:05:28.774: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 24 22:05:28.838: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 24 22:05:28.969: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 24 22:05:29.057: INFO: scanned /root for discovery docs: May 24 22:05:29.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6559' May 24 22:05:45.029: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 24 22:05:45.029: INFO: stdout: "Created e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e\nScaling up e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" May 24 22:05:45.029: INFO: stdout: "Created e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e\nScaling up e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 24 22:05:45.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6559' May 24 22:05:45.138: INFO: stderr: "" May 24 22:05:45.138: INFO: stdout: "e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e-n47tk " May 24 22:05:45.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e-n47tk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6559' May 24 22:05:45.229: INFO: stderr: "" May 24 22:05:45.229: INFO: stdout: "true" May 24 22:05:45.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e-n47tk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6559' May 24 22:05:45.324: INFO: stderr: "" May 24 22:05:45.324: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 24 22:05:45.324: INFO: e2e-test-httpd-rc-454b38c09f031b3e7dba8f480ef8750e-n47tk is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 24 22:05:45.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6559' May 24 22:05:45.423: INFO: stderr: "" May 24 22:05:45.423: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:05:45.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6559" for this suite. 
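The rolling-update test above drives the long-deprecated `kubectl run --generator=run/v1` and `kubectl rolling-update` paths (both warnings are visible in the captured stderr). Roughly, the initial `kubectl run` invocation creates a ReplicationController like the following sketch; the controller name, container name, and image are taken from the log, everything else is the generator's conventional output and should be treated as an approximation:

```yaml
# Approximation of what `kubectl run e2e-test-httpd-rc --generator=run/v1` created.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc        # the label the test later queries with -l run=...
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine
```

`rolling-update` then creates a second RC with a hashed suffix, scales it up while scaling the original down, and renames it back, which matches the "Scaling up … / Scaling … down to 0 / Renaming …" lines in the captured stdout.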
• [SLOW TEST:16.940 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":204,"skipped":3488,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:05:45.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 22:05:49.940: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:05:49.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5372" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3493,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:05:49.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 22:05:50.740: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 22:05:52.752: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725954750, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954750, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954750, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954750, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 22:05:55.819: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:06:05.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7164" for this suite. 
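The admission-webhook test above registers a validating webhook and then verifies that non-compliant pod and configmap creates, updates (PUT and PATCH), and a whitelisted-namespace bypass all behave as expected. A minimal `ValidatingWebhookConfiguration` in the v1 API exercised here might look like this sketch; the webhook name, rules, and service path are hypothetical (only the service name `e2e-test-webhook` appears in the log), and a real configuration would also carry a `caBundle`:

```yaml
# Illustrative sketch, not the exact configuration the test registers.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-example            # hypothetical
webhooks:
- name: deny.example.com        # hypothetical
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-7164   # namespace from the log
      name: e2e-test-webhook    # service name from the log
      path: /validate           # hypothetical path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

In the v1 API, `admissionReviewVersions` and `sideEffects` are required fields; `failurePolicy: Fail` is what makes the "webhook that hangs" step deny the request rather than let it through.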
STEP: Destroying namespace "webhook-7164-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.100 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":206,"skipped":3502,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:06:06.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 24 22:06:06.174: INFO: Waiting up to 5m0s for pod "downward-api-b190c90d-094a-4011-9f4d-f3f899230f95" in namespace "downward-api-5147" to be "success or failure" May 24 22:06:06.190: INFO: Pod "downward-api-b190c90d-094a-4011-9f4d-f3f899230f95": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.766368ms May 24 22:06:08.199: INFO: Pod "downward-api-b190c90d-094a-4011-9f4d-f3f899230f95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024762683s May 24 22:06:10.203: INFO: Pod "downward-api-b190c90d-094a-4011-9f4d-f3f899230f95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029134611s STEP: Saw pod success May 24 22:06:10.203: INFO: Pod "downward-api-b190c90d-094a-4011-9f4d-f3f899230f95" satisfied condition "success or failure" May 24 22:06:10.206: INFO: Trying to get logs from node jerma-worker pod downward-api-b190c90d-094a-4011-9f4d-f3f899230f95 container dapi-container: STEP: delete the pod May 24 22:06:10.444: INFO: Waiting for pod downward-api-b190c90d-094a-4011-9f4d-f3f899230f95 to disappear May 24 22:06:10.501: INFO: Pod downward-api-b190c90d-094a-4011-9f4d-f3f899230f95 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:06:10.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5147" for this suite. 
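For reference, the downward-API test pod above exposes pod metadata to its container through `fieldRef` environment variables. A minimal hand-written sketch of such a pod (image, pod name, and variable names are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # assumed image; the e2e suite uses its own test images
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

The test then reads the container's logs and checks that each variable carries the expected pod name, namespace, and IP.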
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3511,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:06:10.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 24 22:06:10.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 24 22:06:10.824: INFO: stderr: "" May 24 22:06:10.824: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:06:10.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5475" for this suite. 
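The `api-versions` check itself reduces to simple string handling: split the captured stdout on newlines and confirm the core API group appears as a bare `v1` entry. A sketch against an abridged copy of the output above:

```python
# Abridged copy of the `kubectl api-versions` stdout captured above.
stdout = (
    "admissionregistration.k8s.io/v1\n"
    "apps/v1\n"
    "batch/v1\n"
    "storage.k8s.io/v1\n"
    "v1\n"
)

versions = stdout.strip().split("\n")

# The conformance check: the core API group must appear as a bare "v1" line.
print("v1" in versions)  # → True
```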
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":208,"skipped":3531,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:06:10.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1010.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1010.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 22:06:16.939: INFO: DNS probes using dns-1010/dns-test-abd8def0-31e1-40df-90d5-f6319f37e97f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:06:17.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1010" for this suite. 
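The `awk` pipeline in the probe commands above only converts the pod's IP into the dashed form used for pod A records (the `$$` doubling is shell escaping inside the pod spec). The same transformation in Python, with a stand-in IP:

```python
pod_ip = "10.244.2.23"  # stand-in value; the probe obtains it via `hostname -i`

# Equivalent of:
#   hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1010.pod.cluster.local"}'
pod_a_record = pod_ip.replace(".", "-") + ".dns-1010.pod.cluster.local"
print(pod_a_record)  # → 10-244-2-23.dns-1010.pod.cluster.local
```

The probe then resolves this name over both UDP and TCP and writes `OK` marker files that the test collects.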
• [SLOW TEST:6.237 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":209,"skipped":3534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:06:17.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 24 22:06:17.247: INFO: Waiting up to 5m0s for pod "pod-4fdc499c-f2da-45a2-9a61-c7a69d29e28e" in namespace "emptydir-3057" to be "success or failure" May 24 22:06:17.268: INFO: Pod "pod-4fdc499c-f2da-45a2-9a61-c7a69d29e28e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.942107ms May 24 22:06:19.273: INFO: Pod "pod-4fdc499c-f2da-45a2-9a61-c7a69d29e28e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025352682s May 24 22:06:21.277: INFO: Pod "pod-4fdc499c-f2da-45a2-9a61-c7a69d29e28e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029333155s STEP: Saw pod success May 24 22:06:21.277: INFO: Pod "pod-4fdc499c-f2da-45a2-9a61-c7a69d29e28e" satisfied condition "success or failure" May 24 22:06:21.280: INFO: Trying to get logs from node jerma-worker2 pod pod-4fdc499c-f2da-45a2-9a61-c7a69d29e28e container test-container: STEP: delete the pod May 24 22:06:21.330: INFO: Waiting for pod pod-4fdc499c-f2da-45a2-9a61-c7a69d29e28e to disappear May 24 22:06:21.349: INFO: Pod pod-4fdc499c-f2da-45a2-9a61-c7a69d29e28e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:06:21.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3057" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:06:21.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-eb3b1b36-9c94-418b-a3e4-267224fd6325 STEP: Creating secret with name s-test-opt-upd-73d28722-903e-4f90-ba1f-81fb1f23c72d STEP: Creating the pod 
STEP: Deleting secret s-test-opt-del-eb3b1b36-9c94-418b-a3e4-267224fd6325 STEP: Updating secret s-test-opt-upd-73d28722-903e-4f90-ba1f-81fb1f23c72d STEP: Creating secret with name s-test-opt-create-bf32a7a4-900c-46b0-91c6-6f13eef82e10 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:07:36.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2654" for this suite. • [SLOW TEST:74.771 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3610,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:07:36.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 24 22:07:36.227: INFO: 
Waiting up to 5m0s for pod "pod-4f6ccf3f-a1e5-42fe-975a-466a065a7ca2" in namespace "emptydir-7938" to be "success or failure" May 24 22:07:36.231: INFO: Pod "pod-4f6ccf3f-a1e5-42fe-975a-466a065a7ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.745506ms May 24 22:07:38.269: INFO: Pod "pod-4f6ccf3f-a1e5-42fe-975a-466a065a7ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042693861s May 24 22:07:40.274: INFO: Pod "pod-4f6ccf3f-a1e5-42fe-975a-466a065a7ca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047252924s STEP: Saw pod success May 24 22:07:40.274: INFO: Pod "pod-4f6ccf3f-a1e5-42fe-975a-466a065a7ca2" satisfied condition "success or failure" May 24 22:07:40.278: INFO: Trying to get logs from node jerma-worker pod pod-4f6ccf3f-a1e5-42fe-975a-466a065a7ca2 container test-container: STEP: delete the pod May 24 22:07:40.324: INFO: Waiting for pod pod-4f6ccf3f-a1e5-42fe-975a-466a065a7ca2 to disappear May 24 22:07:40.353: INFO: Pod pod-4f6ccf3f-a1e5-42fe-975a-466a065a7ca2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:07:40.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7938" for this suite. 
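The `(non-root,0777,tmpfs)` case above boils down to a memory-backed `emptyDir` volume mounted into a container running as a non-root UID, which writes a file and verifies its mode. A hand-written sketch (names, UID, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root UID, illustrative
  containers:
  - name: test-container
    image: busybox               # assumed image
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0777 /mnt/f && stat -c '%a' /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed, as in the "(…,tmpfs)" test variants
```

The `(root,0666,default)` variant that follows differs only in the UID, the mode, and omitting `medium: Memory` (node-default storage).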
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:07:40.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 24 22:07:40.678: INFO: Waiting up to 5m0s for pod "pod-c9648a4e-c7de-427c-a9d6-0844542203bc" in namespace "emptydir-3892" to be "success or failure" May 24 22:07:40.698: INFO: Pod "pod-c9648a4e-c7de-427c-a9d6-0844542203bc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.299762ms May 24 22:07:42.702: INFO: Pod "pod-c9648a4e-c7de-427c-a9d6-0844542203bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02448307s May 24 22:07:44.707: INFO: Pod "pod-c9648a4e-c7de-427c-a9d6-0844542203bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028697506s STEP: Saw pod success May 24 22:07:44.707: INFO: Pod "pod-c9648a4e-c7de-427c-a9d6-0844542203bc" satisfied condition "success or failure" May 24 22:07:44.710: INFO: Trying to get logs from node jerma-worker pod pod-c9648a4e-c7de-427c-a9d6-0844542203bc container test-container: STEP: delete the pod May 24 22:07:44.744: INFO: Waiting for pod pod-c9648a4e-c7de-427c-a9d6-0844542203bc to disappear May 24 22:07:44.752: INFO: Pod pod-c9648a4e-c7de-427c-a9d6-0844542203bc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:07:44.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3892" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3650,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:07:44.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 
'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:08:16.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3606" for this suite. 
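The three container names above encode the restart policy under test: `rpa`, `rpof`, and `rpn` appear to correspond to `restartPolicy: Always`, `OnFailure`, and `Never`, with the suite checking RestartCount, Phase, Ready, and State against the policy. A sketch of the `OnFailure` variant (image and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof-demo  # illustrative name
spec:
  restartPolicy: OnFailure       # "rpof": restarted until the command exits 0
  containers:
  - name: terminate-cmd-rpof
    image: busybox               # assumed image
    command: ["sh", "-c", "exit 1"]
```

With `OnFailure`, a nonzero exit increments RestartCount and keeps the pod out of the Succeeded phase; with `Never`, the same exit leaves the pod Failed with RestartCount 0.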
• [SLOW TEST:31.515 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:08:16.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 22:08:17.599: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 22:08:19.676: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954897, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954897, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954897, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954897, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 22:08:22.756: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:08:22.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6473" for this suite. STEP: Destroying namespace "webhook-6473-markers" for this suite. 
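The update and patch steps above toggle the `operations` list in the webhook's rules, so configmap creates are first ignored and then mutated again. A minimal `MutatingWebhookConfiguration` sketch showing the field being toggled (webhook name, path, and CA bundle are placeholders; only the service namespace is taken from this run):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook    # placeholder name
webhooks:
- name: add-label.example.com        # placeholder
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: webhook-6473        # namespace from the run above
      name: e2e-test-webhook
      path: /mutating-configmaps     # placeholder path
    caBundle: <base64-ca>            # placeholder
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
    operations: ["CREATE"]           # the test removes, then re-adds, CREATE
```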
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.740 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":215,"skipped":3677,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:08:23.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0524 22:08:33.121243 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 24 22:08:33.121: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:08:33.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1496" for this suite. 
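"Not orphaning" in the test above means the ReplicationController is deleted with a cascading propagation policy, so the garbage collector removes the RC's pods instead of leaving them behind. The delete request body, sketched:

```yaml
# DeleteOptions body sent with the ReplicationController delete (a sketch).
# The alternative policies are Foreground and Orphan.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background   # GC deletes dependent pods rather than orphaning them
```

The test then waits for all pods owned by the RC (via their ownerReferences) to be garbage collected before gathering metrics.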
• [SLOW TEST:10.114 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":216,"skipped":3693,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:08:33.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1c66e5c6-63db-4ec6-8fae-8dace940e40e STEP: Creating a pod to test consume secrets May 24 22:08:33.283: INFO: Waiting up to 5m0s for pod "pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606" in namespace "secrets-3064" to be "success or failure" May 24 22:08:33.323: INFO: Pod "pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606": Phase="Pending", Reason="", readiness=false. Elapsed: 39.51303ms May 24 22:08:35.327: INFO: Pod "pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.043648949s May 24 22:08:37.331: INFO: Pod "pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606": Phase="Running", Reason="", readiness=true. Elapsed: 4.047626161s May 24 22:08:39.334: INFO: Pod "pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051357621s STEP: Saw pod success May 24 22:08:39.335: INFO: Pod "pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606" satisfied condition "success or failure" May 24 22:08:39.337: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606 container secret-volume-test: STEP: delete the pod May 24 22:08:39.354: INFO: Waiting for pod pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606 to disappear May 24 22:08:39.359: INFO: Pod pod-secrets-b48f0c03-b0c8-4da8-9fd4-fca33989a606 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:08:39.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3064" for this suite. 
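The "non-root with defaultMode and fsGroup set" case above mounts a secret volume whose file mode comes from `defaultMode`, with group ownership governed by the pod's `fsGroup`. A hand-written sketch (UID, group, mode, and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo            # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root UID, illustrative
    fsGroup: 2000                   # illustrative group; applied to volume files
  containers:
  - name: secret-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo  # placeholder secret name
      defaultMode: 0400             # octal file mode for the projected keys
```

The container lists the mounted files, and the test asserts the expected mode and group on them.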
• [SLOW TEST:6.237 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3704,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:08:39.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:08:39.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1653' May 24 22:08:39.769: INFO: stderr: "" May 24 22:08:39.769: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 24 22:08:39.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1653' May 24 22:08:40.045: INFO: stderr: 
"" May 24 22:08:40.045: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 24 22:08:41.049: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:08:41.049: INFO: Found 0 / 1 May 24 22:08:42.079: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:08:42.079: INFO: Found 0 / 1 May 24 22:08:43.049: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:08:43.049: INFO: Found 1 / 1 May 24 22:08:43.049: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 24 22:08:43.052: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:08:43.052: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 24 22:08:43.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-j4d5n --namespace=kubectl-1653' May 24 22:08:43.174: INFO: stderr: "" May 24 22:08:43.174: INFO: stdout: "Name: agnhost-master-j4d5n\nNamespace: kubectl-1653\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Sun, 24 May 2020 22:08:39 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.23\nIPs:\n IP: 10.244.2.23\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://7492515f514f37f6f1d389a14c8e554acbf3d3e5893b8b86b72a2678e94d8d31\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 24 May 2020 22:08:42 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-zk4jw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-zk4jw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-zk4jw\n 
Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-1653/agnhost-master-j4d5n to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" May 24 22:08:43.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1653' May 24 22:08:43.298: INFO: stderr: "" May 24 22:08:43.298: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1653\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-j4d5n\n" May 24 22:08:43.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1653' May 24 22:08:43.403: INFO: stderr: "" May 24 22:08:43.403: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1653\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.89.233\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.23:6379\nSession Affinity: None\nEvents: \n" May 24 22:08:43.406: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 24 22:08:43.530: INFO: stderr: "" May 24 22:08:43.530: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sun, 24 May 2020 22:08:35 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 24 May 2020 22:07:47 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 24 May 2020 22:07:47 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 24 May 2020 22:07:47 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 24 May 2020 22:07:47 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: 
ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 70d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 70d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 70d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 70d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 70d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 70d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 70d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 70d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 70d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 24 22:08:43.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1653' May 24 22:08:43.640: INFO: stderr: "" May 24 22:08:43.640: INFO: stdout: "Name: kubectl-1653\nLabels: e2e-framework=kubectl\n e2e-run=c97398e3-2977-49fe-add3-364fa823d11a\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:08:43.640: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1653" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":218,"skipped":3705,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:08:43.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4569 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4569 to expose endpoints map[] May 24 22:08:43.781: INFO: Get endpoints failed (14.171311ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 24 22:08:44.785: INFO: successfully validated that service endpoint-test2 in namespace services-4569 exposes endpoints map[] (1.018308313s elapsed) STEP: Creating pod pod1 in namespace services-4569 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4569 to expose endpoints map[pod1:[80]] May 24 22:08:47.836: INFO: successfully validated that service endpoint-test2 in namespace services-4569 exposes endpoints map[pod1:[80]] (3.043605969s elapsed) STEP: Creating pod pod2 in namespace 
services-4569 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4569 to expose endpoints map[pod1:[80] pod2:[80]] May 24 22:08:52.137: INFO: successfully validated that service endpoint-test2 in namespace services-4569 exposes endpoints map[pod1:[80] pod2:[80]] (4.29132869s elapsed) STEP: Deleting pod pod1 in namespace services-4569 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4569 to expose endpoints map[pod2:[80]] May 24 22:08:52.202: INFO: successfully validated that service endpoint-test2 in namespace services-4569 exposes endpoints map[pod2:[80]] (59.786892ms elapsed) STEP: Deleting pod pod2 in namespace services-4569 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4569 to expose endpoints map[] May 24 22:08:53.252: INFO: successfully validated that service endpoint-test2 in namespace services-4569 exposes endpoints map[] (1.021622954s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:08:53.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4569" for this suite. 
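The endpoint transitions logged above (map[] → map[pod1:[80]] → map[pod1:[80] pod2:[80]] → map[pod2:[80]] → map[]) come from the endpoints controller tracking pods that match the service selector. A minimal sketch of equivalent objects follows; the label key and agnhost arguments are assumptions for illustration, not the exact spec the e2e framework generates.

```yaml
# Hypothetical equivalent of the test's objects: a ClusterIP service whose
# selector matches pods as they are created and deleted. Each matching
# ready pod contributes its IP:80 to the service's Endpoints object.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test      # assumed label key
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-test      # matches the service selector above
spec:
  containers:
  - name: server
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["serve-hostname", "--port=80"]   # assumed args
    ports:
    - containerPort: 80
```

Watching `kubectl get endpoints endpoint-test2 -w` while creating and deleting such pods reproduces the sequence the log validates.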
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.652 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":219,"skipped":3709,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:08:53.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-ec7d506e-c2ba-4c30-9f0c-2c98868f2d6e STEP: Creating configMap with name cm-test-opt-upd-162e1ac8-8c38-4a4c-96c0-7b0afa9309c4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ec7d506e-c2ba-4c30-9f0c-2c98868f2d6e STEP: Updating configmap cm-test-opt-upd-162e1ac8-8c38-4a4c-96c0-7b0afa9309c4 STEP: Creating configMap with name cm-test-opt-create-a444b76a-70b0-4dd2-b0bb-9154ee7b1e13 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:01.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5898" for this suite. • [SLOW TEST:8.270 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3720,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:01.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:09:01.632: INFO: Creating ReplicaSet my-hostname-basic-8015d0a9-b379-4236-bcb5-faa515613632 May 24 22:09:01.650: INFO: Pod name my-hostname-basic-8015d0a9-b379-4236-bcb5-faa515613632: Found 0 pods out of 1 May 24 22:09:06.653: INFO: Pod name my-hostname-basic-8015d0a9-b379-4236-bcb5-faa515613632: Found 1 pods out of 1 May 24 22:09:06.653: INFO: Ensuring a pod for ReplicaSet 
"my-hostname-basic-8015d0a9-b379-4236-bcb5-faa515613632" is running May 24 22:09:06.656: INFO: Pod "my-hostname-basic-8015d0a9-b379-4236-bcb5-faa515613632-q5vl5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 22:09:01 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 22:09:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 22:09:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 22:09:01 +0000 UTC Reason: Message:}]) May 24 22:09:06.656: INFO: Trying to dial the pod May 24 22:09:11.667: INFO: Controller my-hostname-basic-8015d0a9-b379-4236-bcb5-faa515613632: Got expected result from replica 1 [my-hostname-basic-8015d0a9-b379-4236-bcb5-faa515613632-q5vl5]: "my-hostname-basic-8015d0a9-b379-4236-bcb5-faa515613632-q5vl5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:11.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7580" for this suite. 
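The ReplicaSet under test serves each replica's hostname and the framework dials every replica until it gets the expected pod name back. A sketch of a comparable manifest, with an illustrative fixed name in place of the UUID-suffixed one the test generates:

```yaml
# Hypothetical equivalent of the test's ReplicaSet: one replica running
# agnhost serve-hostname, which answers HTTP requests with the pod's
# hostname (the pod name), letting a client verify each replica.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic        # the test appends a generated UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376    # serve-hostname's default port
```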
• [SLOW TEST:10.105 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":221,"skipped":3732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:11.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:09:11.776: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 42.91657ms)
May 24 22:09:11.780: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.27189ms)
May 24 22:09:11.784: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.681467ms)
May 24 22:09:11.788: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.752264ms)
May 24 22:09:11.792: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.252301ms)
May 24 22:09:11.796: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.590287ms)
May 24 22:09:11.799: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.955234ms)
May 24 22:09:11.803: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.433042ms)
May 24 22:09:11.807: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.63776ms)
May 24 22:09:11.810: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.793895ms)
May 24 22:09:11.815: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.82151ms)
May 24 22:09:11.819: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.15886ms)
May 24 22:09:11.822: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.042714ms)
May 24 22:09:11.825: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.335596ms)
May 24 22:09:11.828: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.008031ms)
May 24 22:09:11.832: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.730527ms)
May 24 22:09:11.835: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.459086ms)
May 24 22:09:11.838: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.137801ms)
May 24 22:09:11.841: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.800381ms)
May 24 22:09:11.844: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 2.327ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:11.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7734" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":222,"skipped":3771,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:11.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 24 22:09:20.016: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 22:09:20.021: INFO: Pod pod-with-prestop-exec-hook still exists May 24 22:09:22.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 22:09:22.026: INFO: Pod pod-with-prestop-exec-hook still exists May 24 22:09:24.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 22:09:24.024: INFO: Pod pod-with-prestop-exec-hook still exists May 24 22:09:26.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 22:09:26.025: INFO: Pod pod-with-prestop-exec-hook still exists May 24 22:09:28.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 22:09:28.026: INFO: Pod pod-with-prestop-exec-hook still exists May 24 22:09:30.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 22:09:30.025: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:30.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8231" for this suite. 
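The hook test above creates a pod whose preStop handler calls out to the handler pod set up in BeforeEach, then deletes the pod and checks that the handler saw the request before the pod disappeared. A minimal sketch of such a pod spec; the handler address and command are assumptions, since the real test wires in the HTTPGet-handler pod's IP at runtime:

```yaml
# Hypothetical pod with a preStop exec hook: on deletion, the kubelet runs
# the command in the container before sending SIGTERM, so the external
# handler records the hook invocation.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    lifecycle:
      preStop:
        exec:
          # assumed handler address; the test substitutes the real
          # handler pod's IP
          command: ["sh", "-c", "curl http://HANDLER_POD_IP:8080/echo?msg=prestop"]
```

The repeated "still exists" lines in the log reflect the grace period during which the hook runs before the pod object is finally removed.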
• [SLOW TEST:18.190 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:30.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 22:09:34.148: INFO: Expected: 
&{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:34.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6287" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3825,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:34.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:09:34.447: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 31.235966ms)
May 24 22:09:34.451: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.74803ms)
May 24 22:09:34.454: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.795824ms)
May 24 22:09:34.457: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.535656ms)
May 24 22:09:34.460: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.117779ms)
May 24 22:09:34.464: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.374045ms)
May 24 22:09:34.467: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.339261ms)
May 24 22:09:34.470: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.766293ms)
May 24 22:09:34.473: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.533479ms)
May 24 22:09:34.475: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.887956ms)
May 24 22:09:34.479: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.034958ms)
May 24 22:09:34.481: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.735258ms)
May 24 22:09:34.484: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.717612ms)
May 24 22:09:34.487: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.529328ms)
May 24 22:09:34.489: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.786097ms)
May 24 22:09:34.495: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 5.52536ms)
May 24 22:09:34.499: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.598427ms)
May 24 22:09:34.501: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.664745ms)
May 24 22:09:34.504: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.413835ms)
May 24 22:09:34.507: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.765658ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:34.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2190" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":225,"skipped":3830,"failed":0} S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:34.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:09:34.589: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:38.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3182" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:38.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 24 22:09:38.813: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9626 /api/v1/namespaces/watch-9626/configmaps/e2e-watch-test-label-changed bdd10ce9-2460-4c06-8ea8-cd554471ad18 18867813 0 2020-05-24 22:09:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 22:09:38.813: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9626 /api/v1/namespaces/watch-9626/configmaps/e2e-watch-test-label-changed bdd10ce9-2460-4c06-8ea8-cd554471ad18 18867814 0 2020-05-24 22:09:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 24 22:09:38.813: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9626 /api/v1/namespaces/watch-9626/configmaps/e2e-watch-test-label-changed bdd10ce9-2460-4c06-8ea8-cd554471ad18 18867815 0 2020-05-24 22:09:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 24 22:09:48.922: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9626 /api/v1/namespaces/watch-9626/configmaps/e2e-watch-test-label-changed bdd10ce9-2460-4c06-8ea8-cd554471ad18 18867870 0 2020-05-24 22:09:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 22:09:48.922: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9626 /api/v1/namespaces/watch-9626/configmaps/e2e-watch-test-label-changed bdd10ce9-2460-4c06-8ea8-cd554471ad18 18867871 0 2020-05-24 22:09:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 24 22:09:48.922: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9626 /api/v1/namespaces/watch-9626/configmaps/e2e-watch-test-label-changed bdd10ce9-2460-4c06-8ea8-cd554471ad18 18867872 0 2020-05-24 22:09:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:48.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9626" for this suite. • [SLOW TEST:10.186 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":227,"skipped":3856,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:48.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-b4a5241a-958b-4632-8788-cdccd44edc60 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b4a5241a-958b-4632-8788-cdccd44edc60 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:09:57.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2448" for this suite. • [SLOW TEST:8.146 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3873,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:09:57.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 22:09:57.638: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 22:09:59.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954997, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954997, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954997, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725954997, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 22:10:02.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:10:15.018: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8422" for this suite. STEP: Destroying namespace "webhook-8422-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.024 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":229,"skipped":3875,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:10:15.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-53258cee-9bd2-44fe-974a-856da73520ec STEP: Creating a pod to test consume secrets May 24 22:10:15.175: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-34936796-6fbd-4345-bd53-d79e5decc489" 
in namespace "projected-1608" to be "success or failure" May 24 22:10:15.195: INFO: Pod "pod-projected-secrets-34936796-6fbd-4345-bd53-d79e5decc489": Phase="Pending", Reason="", readiness=false. Elapsed: 20.387205ms May 24 22:10:17.218: INFO: Pod "pod-projected-secrets-34936796-6fbd-4345-bd53-d79e5decc489": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042958724s May 24 22:10:19.222: INFO: Pod "pod-projected-secrets-34936796-6fbd-4345-bd53-d79e5decc489": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046857903s STEP: Saw pod success May 24 22:10:19.222: INFO: Pod "pod-projected-secrets-34936796-6fbd-4345-bd53-d79e5decc489" satisfied condition "success or failure" May 24 22:10:19.224: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-34936796-6fbd-4345-bd53-d79e5decc489 container projected-secret-volume-test: STEP: delete the pod May 24 22:10:19.244: INFO: Waiting for pod pod-projected-secrets-34936796-6fbd-4345-bd53-d79e5decc489 to disappear May 24 22:10:19.248: INFO: Pod pod-projected-secrets-34936796-6fbd-4345-bd53-d79e5decc489 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:10:19.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1608" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3879,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:10:19.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:10:30.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3478" for this suite. • [SLOW TEST:11.173 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":231,"skipped":3879,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:10:30.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-2lb9 STEP: Creating a pod to test atomic-volume-subpath May 24 22:10:30.519: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2lb9" in namespace "subpath-5818" to be "success or failure" May 24 22:10:30.559: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Pending", Reason="", readiness=false. Elapsed: 39.184014ms May 24 22:10:32.563: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043486551s May 24 22:10:34.567: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.047573881s May 24 22:10:36.595: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 6.075789129s May 24 22:10:38.599: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.079607974s May 24 22:10:40.603: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 10.083633312s May 24 22:10:42.607: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 12.087832044s May 24 22:10:44.649: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 14.129845487s May 24 22:10:46.654: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 16.134128521s May 24 22:10:48.679: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 18.159744425s May 24 22:10:50.683: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 20.163291934s May 24 22:10:52.686: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Running", Reason="", readiness=true. Elapsed: 22.167113143s May 24 22:10:54.691: INFO: Pod "pod-subpath-test-configmap-2lb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.171424116s STEP: Saw pod success May 24 22:10:54.691: INFO: Pod "pod-subpath-test-configmap-2lb9" satisfied condition "success or failure" May 24 22:10:54.693: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-2lb9 container test-container-subpath-configmap-2lb9: STEP: delete the pod May 24 22:10:54.812: INFO: Waiting for pod pod-subpath-test-configmap-2lb9 to disappear May 24 22:10:54.824: INFO: Pod pod-subpath-test-configmap-2lb9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-2lb9 May 24 22:10:54.824: INFO: Deleting pod "pod-subpath-test-configmap-2lb9" in namespace "subpath-5818" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:10:54.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5818" for this suite. 
• [SLOW TEST:24.437 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":232,"skipped":3889,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:10:54.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 24 22:10:54.951: INFO: namespace kubectl-5018 May 24 22:10:54.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5018' May 24 22:10:55.457: INFO: stderr: "" May 24 22:10:55.457: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 24 22:10:56.462: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:10:56.462: INFO: Found 0 / 1 May 24 22:10:57.461: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:10:57.461: INFO: Found 0 / 1 May 24 22:10:58.462: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:10:58.462: INFO: Found 0 / 1 May 24 22:10:59.462: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:10:59.462: INFO: Found 1 / 1 May 24 22:10:59.462: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 24 22:10:59.466: INFO: Selector matched 1 pods for map[app:agnhost] May 24 22:10:59.466: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 24 22:10:59.466: INFO: wait on agnhost-master startup in kubectl-5018 May 24 22:10:59.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-b7llq agnhost-master --namespace=kubectl-5018' May 24 22:10:59.575: INFO: stderr: "" May 24 22:10:59.575: INFO: stdout: "Paused\n" STEP: exposing RC May 24 22:10:59.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5018' May 24 22:10:59.728: INFO: stderr: "" May 24 22:10:59.728: INFO: stdout: "service/rm2 exposed\n" May 24 22:10:59.738: INFO: Service rm2 in namespace kubectl-5018 found. STEP: exposing service May 24 22:11:01.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5018' May 24 22:11:01.877: INFO: stderr: "" May 24 22:11:01.877: INFO: stdout: "service/rm3 exposed\n" May 24 22:11:01.886: INFO: Service rm3 in namespace kubectl-5018 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:11:03.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5018" for this suite. • [SLOW TEST:9.032 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":233,"skipped":3898,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:11:03.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 22:11:04.395: INFO: deployment "sample-webhook-deployment" doesn't have 
the required revision set May 24 22:11:06.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955064, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955064, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 22:11:08.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955064, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955064, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the 
webhook service STEP: Verifying the service has paired with the endpoint May 24 22:11:11.444: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:11:11.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7670-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:11:12.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6154" for this suite. STEP: Destroying namespace "webhook-6154-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.881 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":234,"skipped":3931,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:11:12.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 24 22:11:20.950: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 22:11:20.959: INFO: Pod pod-with-poststart-http-hook still exists May 24 22:11:22.959: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 22:11:22.964: INFO: Pod pod-with-poststart-http-hook still exists May 24 22:11:24.959: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 22:11:24.963: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:11:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5285" for this suite. 
• [SLOW TEST:12.189 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3941,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:11:24.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b67eeb9f-cdf7-4117-8d52-ed17f06c87d3 STEP: Creating a pod to test consume secrets May 24 22:11:25.081: INFO: Waiting up to 5m0s for pod "pod-secrets-58124d77-7d5f-45fc-a609-dde7fb4e932a" in namespace "secrets-1809" to be "success or failure" May 24 22:11:25.084: INFO: Pod "pod-secrets-58124d77-7d5f-45fc-a609-dde7fb4e932a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.615655ms May 24 22:11:27.088: INFO: Pod "pod-secrets-58124d77-7d5f-45fc-a609-dde7fb4e932a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006592094s May 24 22:11:29.092: INFO: Pod "pod-secrets-58124d77-7d5f-45fc-a609-dde7fb4e932a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011248839s STEP: Saw pod success May 24 22:11:29.092: INFO: Pod "pod-secrets-58124d77-7d5f-45fc-a609-dde7fb4e932a" satisfied condition "success or failure" May 24 22:11:29.095: INFO: Trying to get logs from node jerma-worker pod pod-secrets-58124d77-7d5f-45fc-a609-dde7fb4e932a container secret-volume-test: STEP: delete the pod May 24 22:11:29.122: INFO: Waiting for pod pod-secrets-58124d77-7d5f-45fc-a609-dde7fb4e932a to disappear May 24 22:11:29.126: INFO: Pod pod-secrets-58124d77-7d5f-45fc-a609-dde7fb4e932a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:11:29.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1809" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3958,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:11:29.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-438.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-438.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 22:11:35.379: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:35.382: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:35.385: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:35.388: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:35.397: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:35.400: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod 
dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:35.402: INFO: Unable to read jessie_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:35.405: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:35.411: INFO: Lookups using dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local] May 24 22:11:40.417: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:40.421: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:40.427: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local from pod 
dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:40.431: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:40.438: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:40.441: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:40.443: INFO: Unable to read jessie_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:40.445: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:40.450: INFO: Lookups using dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local] May 24 22:11:45.417: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:45.420: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:45.424: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:45.453: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:45.462: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:45.464: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:45.467: INFO: Unable to read jessie_udp@dns-test-service-2.dns-438.svc.cluster.local from pod 
dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:45.469: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:45.474: INFO: Lookups using dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local] May 24 22:11:50.416: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:50.420: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:50.423: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:50.426: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod 
dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:50.436: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:50.439: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:50.443: INFO: Unable to read jessie_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:50.446: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:50.453: INFO: Lookups using dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local] May 24 22:11:55.417: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod 
dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:55.421: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:55.424: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:55.427: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:55.459: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:55.462: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:55.465: INFO: Unable to read jessie_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:55.468: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find 
the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:11:55.475: INFO: Lookups using dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local] May 24 22:12:00.417: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:12:00.420: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:12:00.424: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:12:00.427: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:12:00.439: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the 
requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:12:00.441: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:12:00.444: INFO: Unable to read jessie_udp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:12:00.446: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local from pod dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad: the server could not find the requested resource (get pods dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad) May 24 22:12:00.451: INFO: Lookups using dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local wheezy_udp@dns-test-service-2.dns-438.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-438.svc.cluster.local jessie_udp@dns-test-service-2.dns-438.svc.cluster.local jessie_tcp@dns-test-service-2.dns-438.svc.cluster.local] May 24 22:12:05.464: INFO: DNS probes using dns-438/dns-test-eeb8f52c-5934-4321-b5c0-4daa9fc047ad succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:12:06.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-438" for this suite. 
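The wheezy/jessie probe commands above are hard to read in their flattened form. Below is a hedged, self-contained reconstruction of the loop's control flow. Note that the `$$` in the logged commands is the harness escaping `$` so Kubernetes variable expansion in the container command passes a literal `$` through to the pod's shell. In this sketch, `lookup` is a hypothetical stub standing in for `dig +noall +answer +search <name> A`, and `/tmp/results` stands in for the pod's `/results` volume:

```shell
#!/bin/sh
# Sketch of the e2e DNS probe loop, runnable anywhere: `lookup` is a stub
# standing in for `dig +notcp/+tcp +noall +answer +search <name> A`, and
# /tmp/results stands in for the pod's /results volume.
RESULTS=/tmp/results
mkdir -p "$RESULTS"

lookup() {
  # Hypothetical stand-in for the real probe:
  #   dig +notcp +noall +answer +search "$1" A
  echo "10.244.1.17"
}

name="dns-test-service-2.dns-438.svc.cluster.local"
for i in $(seq 1 3); do               # the real loop runs up to 600 iterations
  check="$(lookup "$name")" && test -n "$check" \
    && echo OK > "$RESULTS/wheezy_udp@$name"
done
```

Each successful resolution drops an `OK` marker file; the prober then reads those files back (the `Unable to read wheezy_udp@…` lines above are the prober polling before the markers exist).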
• [SLOW TEST:37.129 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":237,"skipped":3960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:12:06.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6384 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-6384 May 24 22:12:06.409: INFO: Found 0 stateful pods, waiting for 1 May 24 22:12:16.414: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas 
was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 24 22:12:16.434: INFO: Deleting all statefulset in ns statefulset-6384 May 24 22:12:16.440: INFO: Scaling statefulset ss to 0 May 24 22:12:36.529: INFO: Waiting for statefulset status.replicas updated to 0 May 24 22:12:36.532: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:12:36.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6384" for this suite. • [SLOW TEST:30.320 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":238,"skipped":3995,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:12:36.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] 
should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1195.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1195.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1195.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1195.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1195.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1195.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 22:12:42.746: INFO: DNS probes using dns-1195/dns-test-8ddd2d1d-1855-45c1-8340-4d3d982caf8e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:12:42.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1195" for this suite. • [SLOW TEST:6.280 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":239,"skipped":4011,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:12:42.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-6b60ee08-3b1b-4e98-aef9-f3099551ad28 STEP: Creating a pod to test consume configMaps May 24 22:12:43.232: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a0e8cd53-3070-4e47-a0ad-401c2a4a79e8" in namespace "projected-8520" to be "success or failure" May 24 22:12:43.274: INFO: Pod "pod-projected-configmaps-a0e8cd53-3070-4e47-a0ad-401c2a4a79e8": Phase="Pending", Reason="", readiness=false. Elapsed: 41.708194ms May 24 22:12:45.278: INFO: Pod "pod-projected-configmaps-a0e8cd53-3070-4e47-a0ad-401c2a4a79e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045993802s May 24 22:12:47.282: INFO: Pod "pod-projected-configmaps-a0e8cd53-3070-4e47-a0ad-401c2a4a79e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050342441s STEP: Saw pod success May 24 22:12:47.282: INFO: Pod "pod-projected-configmaps-a0e8cd53-3070-4e47-a0ad-401c2a4a79e8" satisfied condition "success or failure" May 24 22:12:47.285: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a0e8cd53-3070-4e47-a0ad-401c2a4a79e8 container projected-configmap-volume-test: STEP: delete the pod May 24 22:12:47.375: INFO: Waiting for pod pod-projected-configmaps-a0e8cd53-3070-4e47-a0ad-401c2a4a79e8 to disappear May 24 22:12:47.386: INFO: Pod pod-projected-configmaps-a0e8cd53-3070-4e47-a0ad-401c2a4a79e8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:12:47.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8520" for this suite. 
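For reference, a minimal manifest of the kind this Projected configMap test generates: one ConfigMap projected at two mount paths in the same pod. All names here are illustrative (the real test appends generated UUID suffixes); the sketch only writes the YAML to a file so its shape can be inspected.

```shell
# Write an illustrative pod manifest: the same ConfigMap projected into two
# volumes mounted at different paths. Names are hypothetical stand-ins for
# the UUID-suffixed names in the log above.
cat > /tmp/projected-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-volume-1
    - name: vol-2
      mountPath: /etc/projected-volume-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-demo
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-demo
EOF
```

The test then waits for the pod to reach `Succeeded` and checks the container output, which is the "success or failure" condition visible in the log lines above.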
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4017,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:12:47.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:12:47.478: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:12:51.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8964" for this suite. 
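The websocket log-retrieval test above exercises the pod `log` subresource that a client could also hit by hand. A hedged sketch of constructing that request URL follows; the namespace and pod name are hypothetical, and `127.0.0.1:8001` assumes a local `kubectl proxy` is running. An actual stream would additionally need the websocket upgrade headers shown in the trailing comment.

```shell
# Build the pod-log URL a websocket client connects to. NS and POD are
# hypothetical; 127.0.0.1:8001 assumes `kubectl proxy` is running locally.
NS="pods-8964"
POD="pod-logs-websocket-demo"
URL="http://127.0.0.1:8001/api/v1/namespaces/${NS}/pods/${POD}/log?follow=true"
echo "$URL"
# A real client would then upgrade the connection, e.g.:
#   curl --no-buffer -H 'Connection: Upgrade' -H 'Upgrade: websocket' \
#        -H 'Sec-WebSocket-Version: 13' -H "Sec-WebSocket-Key: $KEY" "$URL"
```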
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4021,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:12:51.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 24 22:12:51.632: INFO: PodSpec: initContainers in spec.initContainers May 24 22:13:40.364: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f5765ebf-1f4a-451a-bd9b-a1afedfabb57", GenerateName:"", Namespace:"init-container-4665", SelfLink:"/api/v1/namespaces/init-container-4665/pods/pod-init-f5765ebf-1f4a-451a-bd9b-a1afedfabb57", UID:"ef3e5c35-9b40-4b0b-927e-32f08f775249", ResourceVersion:"18869183", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725955171, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"632179992"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-l699p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005a89700), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l699p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l699p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l699p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002ebde20), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021b8180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ebdeb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ebded0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002ebded8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002ebdedc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.36", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.36"}}, StartTime:(*v1.Time)(0xc002890ec0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001a4ed90)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001a4ee00)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://31319e9964188850c9a402ef090825f45473ae4f239b1239543a841acebeecdc", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002890f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002890ee0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002ebdf5f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:13:40.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4665" for this suite. 
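The PodSpec dumped above reduces to roughly the following manifest (reconstructed from the logged spec; the metadata name is illustrative). Because `init1` runs `/bin/false` and always exits non-zero, and the pod's restart policy is `Always`, the kubelet keeps retrying `init1` with backoff — `init2` never runs and `run1` stays in `Waiting`, which is exactly the `RestartCount:3` / `Phase:"Pending"` state the test asserts on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # illustrative; the test generates a unique name
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
      limits:
        cpu: 100m
```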
• [SLOW TEST:48.926 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":242,"skipped":4024,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:13:40.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 24 22:13:40.554: INFO: Waiting up to 5m0s for pod "pod-10b01676-e17e-4718-8cf9-e092362460c9" in namespace "emptydir-5363" to be "success or failure" May 24 22:13:40.567: INFO: Pod "pod-10b01676-e17e-4718-8cf9-e092362460c9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.058712ms May 24 22:13:42.571: INFO: Pod "pod-10b01676-e17e-4718-8cf9-e092362460c9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016682584s May 24 22:13:44.575: INFO: Pod "pod-10b01676-e17e-4718-8cf9-e092362460c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021057423s STEP: Saw pod success May 24 22:13:44.575: INFO: Pod "pod-10b01676-e17e-4718-8cf9-e092362460c9" satisfied condition "success or failure" May 24 22:13:44.579: INFO: Trying to get logs from node jerma-worker pod pod-10b01676-e17e-4718-8cf9-e092362460c9 container test-container: STEP: delete the pod May 24 22:13:44.626: INFO: Waiting for pod pod-10b01676-e17e-4718-8cf9-e092362460c9 to disappear May 24 22:13:44.657: INFO: Pod pod-10b01676-e17e-4718-8cf9-e092362460c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:13:44.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5363" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4046,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:13:44.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-e83b9842-3d7a-4872-9cf6-54a1c5eb1af9 in namespace container-probe-6974 May 24 22:13:48.799: INFO: Started pod liveness-e83b9842-3d7a-4872-9cf6-54a1c5eb1af9 in namespace container-probe-6974 STEP: checking the pod's current state and verifying that restartCount is present May 24 22:13:48.801: INFO: Initial restart count of pod liveness-e83b9842-3d7a-4872-9cf6-54a1c5eb1af9 is 0 May 24 22:14:04.841: INFO: Restart count of pod container-probe-6974/liveness-e83b9842-3d7a-4872-9cf6-54a1c5eb1af9 is now 1 (16.039481949s elapsed) May 24 22:14:26.886: INFO: Restart count of pod container-probe-6974/liveness-e83b9842-3d7a-4872-9cf6-54a1c5eb1af9 is now 2 (38.084585367s elapsed) May 24 22:14:45.088: INFO: Restart count of pod container-probe-6974/liveness-e83b9842-3d7a-4872-9cf6-54a1c5eb1af9 is now 3 (56.286682709s elapsed) May 24 22:15:05.153: INFO: Restart count of pod container-probe-6974/liveness-e83b9842-3d7a-4872-9cf6-54a1c5eb1af9 is now 4 (1m16.351534801s elapsed) May 24 22:16:15.302: INFO: Restart count of pod container-probe-6974/liveness-e83b9842-3d7a-4872-9cf6-54a1c5eb1af9 is now 5 (2m26.500863058s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:16:15.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6974" for this suite. 
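The restart sequence above (restartCount climbing 1 → 5 at widening intervals, reflecting the kubelet's exponential backoff) comes from a pod whose liveness probe keeps failing. The log does not show the probe the test uses, so the following is only a sketch of one way to produce that behavior, assuming an exec probe against a file the container deletes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example   # illustrative; not the test's generated name
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    # Create a health file, then remove it so later probes fail.
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once /tmp/health is gone
      initialDelaySeconds: 5
      periodSeconds: 5
```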
• [SLOW TEST:150.727 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4058,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:16:15.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-9ea9eec4-3316-4d9b-b526-819822eeff7a STEP: Creating a pod to test consume configMaps May 24 22:16:15.635: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-559b1264-2183-466e-860d-94d523cbbe26" in namespace "projected-4832" to be "success or failure" May 24 22:16:15.804: INFO: Pod "pod-projected-configmaps-559b1264-2183-466e-860d-94d523cbbe26": Phase="Pending", Reason="", readiness=false. 
Elapsed: 168.835526ms May 24 22:16:17.808: INFO: Pod "pod-projected-configmaps-559b1264-2183-466e-860d-94d523cbbe26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173396491s May 24 22:16:19.812: INFO: Pod "pod-projected-configmaps-559b1264-2183-466e-860d-94d523cbbe26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177588791s STEP: Saw pod success May 24 22:16:19.812: INFO: Pod "pod-projected-configmaps-559b1264-2183-466e-860d-94d523cbbe26" satisfied condition "success or failure" May 24 22:16:19.816: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-559b1264-2183-466e-860d-94d523cbbe26 container projected-configmap-volume-test: STEP: delete the pod May 24 22:16:19.851: INFO: Waiting for pod pod-projected-configmaps-559b1264-2183-466e-860d-94d523cbbe26 to disappear May 24 22:16:20.104: INFO: Pod pod-projected-configmaps-559b1264-2183-466e-860d-94d523cbbe26 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:16:20.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4832" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4071,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:16:20.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 22:16:20.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd6d21ce-22be-4602-a53d-28f05ad8696a" in namespace "downward-api-7817" to be "success or failure" May 24 22:16:20.289: INFO: Pod "downwardapi-volume-fd6d21ce-22be-4602-a53d-28f05ad8696a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.548953ms May 24 22:16:22.293: INFO: Pod "downwardapi-volume-fd6d21ce-22be-4602-a53d-28f05ad8696a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007212814s May 24 22:16:24.297: INFO: Pod "downwardapi-volume-fd6d21ce-22be-4602-a53d-28f05ad8696a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010978442s STEP: Saw pod success May 24 22:16:24.297: INFO: Pod "downwardapi-volume-fd6d21ce-22be-4602-a53d-28f05ad8696a" satisfied condition "success or failure" May 24 22:16:24.300: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-fd6d21ce-22be-4602-a53d-28f05ad8696a container client-container: STEP: delete the pod May 24 22:16:24.319: INFO: Waiting for pod downwardapi-volume-fd6d21ce-22be-4602-a53d-28f05ad8696a to disappear May 24 22:16:24.330: INFO: Pod downwardapi-volume-fd6d21ce-22be-4602-a53d-28f05ad8696a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:16:24.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7817" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4075,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:16:24.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name 
projected-configmap-test-volume-6d21b0c1-798d-47d9-a09f-0de687dcf595 STEP: Creating a pod to test consume configMaps May 24 22:16:24.470: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-799c5135-b6c6-4e90-8e09-9a0e205f4b66" in namespace "projected-5048" to be "success or failure" May 24 22:16:24.523: INFO: Pod "pod-projected-configmaps-799c5135-b6c6-4e90-8e09-9a0e205f4b66": Phase="Pending", Reason="", readiness=false. Elapsed: 52.203997ms May 24 22:16:26.527: INFO: Pod "pod-projected-configmaps-799c5135-b6c6-4e90-8e09-9a0e205f4b66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056469031s May 24 22:16:28.531: INFO: Pod "pod-projected-configmaps-799c5135-b6c6-4e90-8e09-9a0e205f4b66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061137533s STEP: Saw pod success May 24 22:16:28.532: INFO: Pod "pod-projected-configmaps-799c5135-b6c6-4e90-8e09-9a0e205f4b66" satisfied condition "success or failure" May 24 22:16:28.534: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-799c5135-b6c6-4e90-8e09-9a0e205f4b66 container projected-configmap-volume-test: STEP: delete the pod May 24 22:16:28.734: INFO: Waiting for pod pod-projected-configmaps-799c5135-b6c6-4e90-8e09-9a0e205f4b66 to disappear May 24 22:16:28.739: INFO: Pod pod-projected-configmaps-799c5135-b6c6-4e90-8e09-9a0e205f4b66 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:16:28.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5048" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4087,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:16:28.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 24 22:16:28.812: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:16:39.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-598" for this suite. 
• [SLOW TEST:10.513 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4156,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:16:39.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:16:55.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9870" for this suite. • [SLOW TEST:16.200 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":249,"skipped":4158,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:16:55.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 24 22:16:59.567: INFO: &Pod{ObjectMeta:{send-events-47f42c46-df93-4654-9c6f-9079fc8207a9 events-569 /api/v1/namespaces/events-569/pods/send-events-47f42c46-df93-4654-9c6f-9079fc8207a9 ba3620f3-564f-44f6-90fc-9adec408b986 18870007 0 2020-05-24 22:16:55 +0000 UTC map[name:foo time:514691947] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rv5cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rv5cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rv5cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:ni
l,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 22:16:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 22:16:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 22:16:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 22:16:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.241,StartTime:2020-05-24 22:16:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 22:16:58 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://caa5825d634f821f199ef566d94093b7a134cf68fc105b65da7c3bf5e3be0ed9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 24 22:17:01.571: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 24 22:17:03.576: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:17:03.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-569" for this suite. 
• [SLOW TEST:8.162 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":250,"skipped":4163,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:17:03.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 24 22:17:11.811: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 22:17:11.816: INFO: Pod pod-with-prestop-http-hook still exists May 24 22:17:13.816: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 22:17:13.820: INFO: Pod pod-with-prestop-http-hook still exists May 24 22:17:15.816: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 22:17:15.821: INFO: Pod pod-with-prestop-http-hook still exists May 24 22:17:17.816: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 22:17:17.820: INFO: Pod pod-with-prestop-http-hook still exists May 24 22:17:19.816: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 22:17:19.820: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:17:19.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7849" for this suite. 
• [SLOW TEST:16.213 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:17:19.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5852 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5852 I0524 22:17:20.080909 6 runners.go:189] Created replication controller with name: externalname-service, 
namespace: services-5852, replica count: 2 I0524 22:17:23.131978 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 22:17:26.132278 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 22:17:26.132: INFO: Creating new exec pod May 24 22:17:31.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5852 execpodpcc4v -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 24 22:17:35.111: INFO: stderr: "I0524 22:17:35.027071 3202 log.go:172] (0xc000870bb0) (0xc0006c4780) Create stream\nI0524 22:17:35.027101 3202 log.go:172] (0xc000870bb0) (0xc0006c4780) Stream added, broadcasting: 1\nI0524 22:17:35.028966 3202 log.go:172] (0xc000870bb0) Reply frame received for 1\nI0524 22:17:35.029011 3202 log.go:172] (0xc000870bb0) (0xc000692000) Create stream\nI0524 22:17:35.029021 3202 log.go:172] (0xc000870bb0) (0xc000692000) Stream added, broadcasting: 3\nI0524 22:17:35.029958 3202 log.go:172] (0xc000870bb0) Reply frame received for 3\nI0524 22:17:35.029987 3202 log.go:172] (0xc000870bb0) (0xc0006ce000) Create stream\nI0524 22:17:35.029996 3202 log.go:172] (0xc000870bb0) (0xc0006ce000) Stream added, broadcasting: 5\nI0524 22:17:35.030682 3202 log.go:172] (0xc000870bb0) Reply frame received for 5\nI0524 22:17:35.103672 3202 log.go:172] (0xc000870bb0) Data frame received for 5\nI0524 22:17:35.103702 3202 log.go:172] (0xc0006ce000) (5) Data frame handling\nI0524 22:17:35.103720 3202 log.go:172] (0xc0006ce000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0524 22:17:35.103758 3202 log.go:172] (0xc000870bb0) Data frame received for 5\nI0524 22:17:35.103767 3202 log.go:172] (0xc0006ce000) (5) Data frame handling\nI0524 22:17:35.103775 3202 log.go:172] (0xc0006ce000) (5) Data frame sent\nConnection 
to externalname-service 80 port [tcp/http] succeeded!\nI0524 22:17:35.104059 3202 log.go:172] (0xc000870bb0) Data frame received for 3\nI0524 22:17:35.104077 3202 log.go:172] (0xc000692000) (3) Data frame handling\nI0524 22:17:35.104098 3202 log.go:172] (0xc000870bb0) Data frame received for 5\nI0524 22:17:35.104118 3202 log.go:172] (0xc0006ce000) (5) Data frame handling\nI0524 22:17:35.105842 3202 log.go:172] (0xc000870bb0) Data frame received for 1\nI0524 22:17:35.105873 3202 log.go:172] (0xc0006c4780) (1) Data frame handling\nI0524 22:17:35.105884 3202 log.go:172] (0xc0006c4780) (1) Data frame sent\nI0524 22:17:35.105895 3202 log.go:172] (0xc000870bb0) (0xc0006c4780) Stream removed, broadcasting: 1\nI0524 22:17:35.105912 3202 log.go:172] (0xc000870bb0) Go away received\nI0524 22:17:35.106241 3202 log.go:172] (0xc000870bb0) (0xc0006c4780) Stream removed, broadcasting: 1\nI0524 22:17:35.106256 3202 log.go:172] (0xc000870bb0) (0xc000692000) Stream removed, broadcasting: 3\nI0524 22:17:35.106262 3202 log.go:172] (0xc000870bb0) (0xc0006ce000) Stream removed, broadcasting: 5\n" May 24 22:17:35.111: INFO: stdout: "" May 24 22:17:35.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5852 execpodpcc4v -- /bin/sh -x -c nc -zv -t -w 2 10.107.93.236 80' May 24 22:17:35.352: INFO: stderr: "I0524 22:17:35.256863 3236 log.go:172] (0xc000ab00b0) (0xc0002f3400) Create stream\nI0524 22:17:35.256917 3236 log.go:172] (0xc000ab00b0) (0xc0002f3400) Stream added, broadcasting: 1\nI0524 22:17:35.259548 3236 log.go:172] (0xc000ab00b0) Reply frame received for 1\nI0524 22:17:35.259602 3236 log.go:172] (0xc000ab00b0) (0xc0006bb9a0) Create stream\nI0524 22:17:35.259620 3236 log.go:172] (0xc000ab00b0) (0xc0006bb9a0) Stream added, broadcasting: 3\nI0524 22:17:35.260653 3236 log.go:172] (0xc000ab00b0) Reply frame received for 3\nI0524 22:17:35.260700 3236 log.go:172] (0xc000ab00b0) (0xc000a4e000) Create stream\nI0524 22:17:35.260713 3236 
log.go:172] (0xc000ab00b0) (0xc000a4e000) Stream added, broadcasting: 5\nI0524 22:17:35.261960 3236 log.go:172] (0xc000ab00b0) Reply frame received for 5\nI0524 22:17:35.343796 3236 log.go:172] (0xc000ab00b0) Data frame received for 5\nI0524 22:17:35.343868 3236 log.go:172] (0xc000a4e000) (5) Data frame handling\nI0524 22:17:35.343884 3236 log.go:172] (0xc000a4e000) (5) Data frame sent\nI0524 22:17:35.343894 3236 log.go:172] (0xc000ab00b0) Data frame received for 5\nI0524 22:17:35.343902 3236 log.go:172] (0xc000a4e000) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.93.236 80\nConnection to 10.107.93.236 80 port [tcp/http] succeeded!\nI0524 22:17:35.343921 3236 log.go:172] (0xc000ab00b0) Data frame received for 3\nI0524 22:17:35.343949 3236 log.go:172] (0xc0006bb9a0) (3) Data frame handling\nI0524 22:17:35.343969 3236 log.go:172] (0xc000a4e000) (5) Data frame sent\nI0524 22:17:35.343979 3236 log.go:172] (0xc000ab00b0) Data frame received for 5\nI0524 22:17:35.343987 3236 log.go:172] (0xc000a4e000) (5) Data frame handling\nI0524 22:17:35.345736 3236 log.go:172] (0xc000ab00b0) Data frame received for 1\nI0524 22:17:35.345767 3236 log.go:172] (0xc0002f3400) (1) Data frame handling\nI0524 22:17:35.345795 3236 log.go:172] (0xc0002f3400) (1) Data frame sent\nI0524 22:17:35.345816 3236 log.go:172] (0xc000ab00b0) (0xc0002f3400) Stream removed, broadcasting: 1\nI0524 22:17:35.345906 3236 log.go:172] (0xc000ab00b0) Go away received\nI0524 22:17:35.346332 3236 log.go:172] (0xc000ab00b0) (0xc0002f3400) Stream removed, broadcasting: 1\nI0524 22:17:35.346354 3236 log.go:172] (0xc000ab00b0) (0xc0006bb9a0) Stream removed, broadcasting: 3\nI0524 22:17:35.346363 3236 log.go:172] (0xc000ab00b0) (0xc000a4e000) Stream removed, broadcasting: 5\n" May 24 22:17:35.352: INFO: stdout: "" May 24 22:17:35.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5852 execpodpcc4v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30042' May 24 
22:17:35.543: INFO: stderr: "I0524 22:17:35.482377 3258 log.go:172] (0xc0002286e0) (0xc0006fdc20) Create stream\nI0524 22:17:35.482464 3258 log.go:172] (0xc0002286e0) (0xc0006fdc20) Stream added, broadcasting: 1\nI0524 22:17:35.485004 3258 log.go:172] (0xc0002286e0) Reply frame received for 1\nI0524 22:17:35.485088 3258 log.go:172] (0xc0002286e0) (0xc0006fdcc0) Create stream\nI0524 22:17:35.485305 3258 log.go:172] (0xc0002286e0) (0xc0006fdcc0) Stream added, broadcasting: 3\nI0524 22:17:35.486414 3258 log.go:172] (0xc0002286e0) Reply frame received for 3\nI0524 22:17:35.486444 3258 log.go:172] (0xc0002286e0) (0xc000b70000) Create stream\nI0524 22:17:35.486452 3258 log.go:172] (0xc0002286e0) (0xc000b70000) Stream added, broadcasting: 5\nI0524 22:17:35.487540 3258 log.go:172] (0xc0002286e0) Reply frame received for 5\nI0524 22:17:35.535888 3258 log.go:172] (0xc0002286e0) Data frame received for 5\nI0524 22:17:35.535916 3258 log.go:172] (0xc000b70000) (5) Data frame handling\nI0524 22:17:35.535935 3258 log.go:172] (0xc000b70000) (5) Data frame sent\nI0524 22:17:35.535944 3258 log.go:172] (0xc0002286e0) Data frame received for 5\nI0524 22:17:35.535951 3258 log.go:172] (0xc000b70000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30042\nConnection to 172.17.0.10 30042 port [tcp/30042] succeeded!\nI0524 22:17:35.535973 3258 log.go:172] (0xc000b70000) (5) Data frame sent\nI0524 22:17:35.536161 3258 log.go:172] (0xc0002286e0) Data frame received for 3\nI0524 22:17:35.536178 3258 log.go:172] (0xc0006fdcc0) (3) Data frame handling\nI0524 22:17:35.536338 3258 log.go:172] (0xc0002286e0) Data frame received for 5\nI0524 22:17:35.536362 3258 log.go:172] (0xc000b70000) (5) Data frame handling\nI0524 22:17:35.537914 3258 log.go:172] (0xc0002286e0) Data frame received for 1\nI0524 22:17:35.537947 3258 log.go:172] (0xc0006fdc20) (1) Data frame handling\nI0524 22:17:35.537966 3258 log.go:172] (0xc0006fdc20) (1) Data frame sent\nI0524 22:17:35.537984 3258 log.go:172] 
(0xc0002286e0) (0xc0006fdc20) Stream removed, broadcasting: 1\nI0524 22:17:35.538157 3258 log.go:172] (0xc0002286e0) Go away received\nI0524 22:17:35.538338 3258 log.go:172] (0xc0002286e0) (0xc0006fdc20) Stream removed, broadcasting: 1\nI0524 22:17:35.538362 3258 log.go:172] (0xc0002286e0) (0xc0006fdcc0) Stream removed, broadcasting: 3\nI0524 22:17:35.538372 3258 log.go:172] (0xc0002286e0) (0xc000b70000) Stream removed, broadcasting: 5\n" May 24 22:17:35.543: INFO: stdout: "" May 24 22:17:35.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5852 execpodpcc4v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30042' May 24 22:17:35.766: INFO: stderr: "I0524 22:17:35.663500 3280 log.go:172] (0xc000b06370) (0xc00066df40) Create stream\nI0524 22:17:35.663566 3280 log.go:172] (0xc000b06370) (0xc00066df40) Stream added, broadcasting: 1\nI0524 22:17:35.668786 3280 log.go:172] (0xc000b06370) Reply frame received for 1\nI0524 22:17:35.668839 3280 log.go:172] (0xc000b06370) (0xc0009780a0) Create stream\nI0524 22:17:35.668854 3280 log.go:172] (0xc000b06370) (0xc0009780a0) Stream added, broadcasting: 3\nI0524 22:17:35.669866 3280 log.go:172] (0xc000b06370) Reply frame received for 3\nI0524 22:17:35.669908 3280 log.go:172] (0xc000b06370) (0xc000abe000) Create stream\nI0524 22:17:35.669920 3280 log.go:172] (0xc000b06370) (0xc000abe000) Stream added, broadcasting: 5\nI0524 22:17:35.670745 3280 log.go:172] (0xc000b06370) Reply frame received for 5\nI0524 22:17:35.760438 3280 log.go:172] (0xc000b06370) Data frame received for 5\nI0524 22:17:35.760463 3280 log.go:172] (0xc000abe000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30042\nConnection to 172.17.0.8 30042 port [tcp/30042] succeeded!\nI0524 22:17:35.760486 3280 log.go:172] (0xc000b06370) Data frame received for 3\nI0524 22:17:35.760518 3280 log.go:172] (0xc0009780a0) (3) Data frame handling\nI0524 22:17:35.760547 3280 log.go:172] (0xc000abe000) (5) Data frame sent\nI0524 
22:17:35.760559 3280 log.go:172] (0xc000b06370) Data frame received for 5\nI0524 22:17:35.760565 3280 log.go:172] (0xc000abe000) (5) Data frame handling\nI0524 22:17:35.761739 3280 log.go:172] (0xc000b06370) Data frame received for 1\nI0524 22:17:35.761767 3280 log.go:172] (0xc00066df40) (1) Data frame handling\nI0524 22:17:35.761781 3280 log.go:172] (0xc00066df40) (1) Data frame sent\nI0524 22:17:35.761800 3280 log.go:172] (0xc000b06370) (0xc00066df40) Stream removed, broadcasting: 1\nI0524 22:17:35.761834 3280 log.go:172] (0xc000b06370) Go away received\nI0524 22:17:35.762109 3280 log.go:172] (0xc000b06370) (0xc00066df40) Stream removed, broadcasting: 1\nI0524 22:17:35.762121 3280 log.go:172] (0xc000b06370) (0xc0009780a0) Stream removed, broadcasting: 3\nI0524 22:17:35.762127 3280 log.go:172] (0xc000b06370) (0xc000abe000) Stream removed, broadcasting: 5\n" May 24 22:17:35.767: INFO: stdout: "" May 24 22:17:35.767: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:17:35.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5852" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.975 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":252,"skipped":4208,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:17:35.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:17:35.904: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 24 22:17:37.987: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:17:39.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6713" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":253,"skipped":4216,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:17:39.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:17:40.009: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 24 22:17:43.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1545 create -f -' May 24 22:17:47.857: INFO: stderr: "" May 24 22:17:47.858: INFO: stdout: "e2e-test-crd-publish-openapi-6876-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 24 22:17:47.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1545 delete 
e2e-test-crd-publish-openapi-6876-crds test-cr' May 24 22:17:47.965: INFO: stderr: "" May 24 22:17:47.965: INFO: stdout: "e2e-test-crd-publish-openapi-6876-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 24 22:17:47.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1545 apply -f -' May 24 22:17:48.207: INFO: stderr: "" May 24 22:17:48.207: INFO: stdout: "e2e-test-crd-publish-openapi-6876-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 24 22:17:48.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1545 delete e2e-test-crd-publish-openapi-6876-crds test-cr' May 24 22:17:48.333: INFO: stderr: "" May 24 22:17:48.333: INFO: stdout: "e2e-test-crd-publish-openapi-6876-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 24 22:17:48.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6876-crds' May 24 22:17:48.580: INFO: stderr: "" May 24 22:17:48.580: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6876-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:17:51.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1545" for this suite. 
• [SLOW TEST:12.169 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":254,"skipped":4221,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:17:51.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1060 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1060;check="$$(dig +tcp +noall +answer +search 
dns-test-service.dns-1060 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1060;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1060.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1060.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1060.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1060.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1060.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1060.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1060.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1060.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1060.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1060.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1060.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1060.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1060.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.83.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.83.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.83.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.83.140_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1060 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1060;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1060 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1060;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1060.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1060.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1060.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1060.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1060.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1060.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1060.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1060.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1060.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1060.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1060.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1060.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1060.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.83.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.83.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.83.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.83.140_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 22:17:57.717: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.719: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.722: INFO: Unable to read wheezy_udp@dns-test-service.dns-1060 from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.724: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1060 from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.726: INFO: Unable to read wheezy_udp@dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods 
dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.729: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.731: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.734: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.774: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.776: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.779: INFO: Unable to read jessie_udp@dns-test-service.dns-1060 from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.781: INFO: Unable to read jessie_tcp@dns-test-service.dns-1060 from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.784: INFO: Unable to read jessie_udp@dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the 
requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.786: INFO: Unable to read jessie_tcp@dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.789: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.792: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:17:57.811: INFO: Lookups using dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1060 wheezy_tcp@dns-test-service.dns-1060 wheezy_udp@dns-test-service.dns-1060.svc wheezy_tcp@dns-test-service.dns-1060.svc wheezy_udp@_http._tcp.dns-test-service.dns-1060.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1060.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1060 jessie_tcp@dns-test-service.dns-1060 jessie_udp@dns-test-service.dns-1060.svc jessie_tcp@dns-test-service.dns-1060.svc jessie_udp@_http._tcp.dns-test-service.dns-1060.svc jessie_tcp@_http._tcp.dns-test-service.dns-1060.svc] May 24 22:18:02.816: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0) May 24 22:18:02.819: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not 
find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.822: INFO: Unable to read wheezy_udp@dns-test-service.dns-1060 from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.824: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1060 from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.827: INFO: Unable to read wheezy_udp@dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.834: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.858: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.861: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.864: INFO: Unable to read jessie_udp@dns-test-service.dns-1060 from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.867: INFO: Unable to read jessie_tcp@dns-test-service.dns-1060 from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.870: INFO: Unable to read jessie_udp@dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.872: INFO: Unable to read jessie_tcp@dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.876: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.879: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1060.svc from pod dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0: the server could not find the requested resource (get pods dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0)
May 24 22:18:02.900: INFO: Lookups using dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1060 wheezy_tcp@dns-test-service.dns-1060 wheezy_udp@dns-test-service.dns-1060.svc wheezy_tcp@dns-test-service.dns-1060.svc wheezy_udp@_http._tcp.dns-test-service.dns-1060.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1060.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1060 jessie_tcp@dns-test-service.dns-1060 jessie_udp@dns-test-service.dns-1060.svc jessie_tcp@dns-test-service.dns-1060.svc jessie_udp@_http._tcp.dns-test-service.dns-1060.svc jessie_tcp@_http._tcp.dns-test-service.dns-1060.svc]
[... identical lookup failures and the same 16-record failure summary repeated on the retries at 22:18:07, 22:18:12, 22:18:17, and 22:18:22 ...]
May 24 22:18:27.898: INFO: DNS probes using dns-1060/dns-test-8c3bfaf4-5ee1-4709-9dab-2bf5e37b07e0 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
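For context, each record name in the failure summaries above corresponds to one lookup the probe containers run inside the test pod until the service records become resolvable. A hedged sketch of equivalent manual checks (the suite's actual probe images and script are not shown in this log; flag support varies by nslookup implementation):

```shell
# Run from any pod in the cluster; names assume the test's service
# "dns-test-service" in namespace "dns-1060", as recorded in the log.
nslookup dns-test-service                                      # short name, same namespace
nslookup dns-test-service.dns-1060                             # <service>.<namespace>
nslookup dns-test-service.dns-1060.svc                         # <service>.<namespace>.svc
nslookup -type=SRV _http._tcp.dns-test-service.dns-1060.svc    # SRV record for the "http" port
```

The partial names resolve only because the pod's resolv.conf search list appends the cluster domain, which is what "partial qualified names" in the spec title refers to.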
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 22:18:28.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1060" for this suite.
• [SLOW TEST:37.217 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":255,"skipped":4232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 22:18:28.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
May 24 22:18:28.765: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 22:18:37.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8389" for this suite.
• [SLOW TEST:8.957 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":256,"skipped":4254,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 22:18:37.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 24 22:18:47.819: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 22:18:47.878: INFO: Pod pod-with-poststart-exec-hook still exists
May 24 22:18:49.878: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 22:18:49.883: INFO: Pod pod-with-poststart-exec-hook still exists
May 24 22:18:51.878: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 22:18:51.882: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 22:18:51.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2083" for this suite.
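The pod-with-poststart-exec-hook pod is only named in the log above, not spelled out. A minimal sketch of a pod with a postStart exec hook; the image, sleep command, and hook command here are illustrative assumptions, not taken from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.29                  # illustrative image, not from the log
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it is created;
          # the container does not reach the Running state until the
          # hook completes, which is what the "check poststart hook"
          # step above relies on.
          command: ["sh", "-c", "echo poststart-ran > /tmp/poststart"]
```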
• [SLOW TEST:14.249 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4274,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 22:18:51.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 24 22:18:51.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1080'
May 24 22:18:52.080: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 24 22:18:52.080: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
May 24 22:18:52.100: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-4qfh8]
May 24 22:18:52.100: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-4qfh8" in namespace "kubectl-1080" to be "running and ready"
May 24 22:18:52.122: INFO: Pod "e2e-test-httpd-rc-4qfh8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.353907ms
May 24 22:18:54.126: INFO: Pod "e2e-test-httpd-rc-4qfh8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026368357s
May 24 22:18:56.130: INFO: Pod "e2e-test-httpd-rc-4qfh8": Phase="Running", Reason="", readiness=true. Elapsed: 4.029714981s
May 24 22:18:56.130: INFO: Pod "e2e-test-httpd-rc-4qfh8" satisfied condition "running and ready"
May 24 22:18:56.130: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-4qfh8]
May 24 22:18:56.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1080'
May 24 22:18:56.258: INFO: stderr: ""
May 24 22:18:56.258: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.47. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.47. Set the 'ServerName' directive globally to suppress this message\n[Sun May 24 22:18:54.634951 2020] [mpm_event:notice] [pid 1:tid 140195429956456] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun May 24 22:18:54.635001 2020] [core:notice] [pid 1:tid 140195429956456] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
May 24 22:18:56.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1080'
May 24 22:18:56.355: INFO: stderr: ""
May 24 22:18:56.355: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 22:18:56.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1080" for this suite.
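The deprecation warning captured in the stderr above matters when reproducing this test by hand. Against a v1.17 cluster, the invocation the suite used and the replacements the warning itself suggests would be roughly (namespace and image taken from the log; resource names after the first command are illustrative):

```shell
# Deprecated form the suite still uses (creates a ReplicationController):
kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine \
  --generator=run/v1 --namespace=kubectl-1080

# Replacements per the warning: a bare pod, or an explicit create:
kubectl run e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine \
  --generator=run-pod/v1 --namespace=kubectl-1080
kubectl create deployment e2e-test-httpd \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1080
```

In later kubectl releases the --generator flag was removed entirely and `kubectl run` only creates pods, so conformance logs from newer suites will not show this warning.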
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":258,"skipped":4301,"failed":0}
SSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 24 22:18:56.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 24 22:19:14.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9993" for this suite.
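"Locally restarted" in the Job spec title refers to the kubelet restarting a failed container in place rather than the Job controller creating a replacement pod. A hedged sketch of a Job exercising that behavior; the name, image, counts, and failure command are assumptions for illustration, not taken from this run:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sometimes-fail          # illustrative name, not from the log
spec:
  completions: 8
  parallelism: 2
  template:
    spec:
      # OnFailure makes the kubelet restart the failed container inside
      # the same pod ("locally") until it exits 0, so the Job still
      # reaches its completion count despite intermittent failures.
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: busybox:1.29
        # fails on odd-numbered seconds, succeeds on even ones
        command: ["sh", "-c", "exit $(( $(date +%s) % 2 ))"]
```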
• [SLOW TEST:18.065 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":259,"skipped":4306,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:19:14.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 24 22:19:14.503: INFO: Waiting up to 5m0s for pod "client-containers-5f944501-9612-4f6e-b686-e7e429d422e4" in namespace "containers-6894" to be "success or failure" May 24 22:19:14.520: INFO: Pod "client-containers-5f944501-9612-4f6e-b686-e7e429d422e4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.083046ms May 24 22:19:16.524: INFO: Pod "client-containers-5f944501-9612-4f6e-b686-e7e429d422e4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021201206s May 24 22:19:18.529: INFO: Pod "client-containers-5f944501-9612-4f6e-b686-e7e429d422e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026036267s STEP: Saw pod success May 24 22:19:18.529: INFO: Pod "client-containers-5f944501-9612-4f6e-b686-e7e429d422e4" satisfied condition "success or failure" May 24 22:19:18.532: INFO: Trying to get logs from node jerma-worker2 pod client-containers-5f944501-9612-4f6e-b686-e7e429d422e4 container test-container: STEP: delete the pod May 24 22:19:18.570: INFO: Waiting for pod client-containers-5f944501-9612-4f6e-b686-e7e429d422e4 to disappear May 24 22:19:18.608: INFO: Pod client-containers-5f944501-9612-4f6e-b686-e7e429d422e4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:19:18.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6894" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:19:18.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 22:19:18.702: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67" in namespace "downward-api-2669" to be "success or failure" May 24 22:19:18.771: INFO: Pod "downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67": Phase="Pending", Reason="", readiness=false. Elapsed: 69.83776ms May 24 22:19:20.775: INFO: Pod "downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073843372s May 24 22:19:22.780: INFO: Pod "downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67": Phase="Running", Reason="", readiness=true. Elapsed: 4.078433113s May 24 22:19:24.784: INFO: Pod "downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082790505s STEP: Saw pod success May 24 22:19:24.784: INFO: Pod "downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67" satisfied condition "success or failure" May 24 22:19:24.788: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67 container client-container: STEP: delete the pod May 24 22:19:24.874: INFO: Waiting for pod downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67 to disappear May 24 22:19:24.882: INFO: Pod downwardapi-volume-90b89fa1-7c8a-4209-b94f-1e8ccc565c67 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:19:24.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2669" for this suite. 
• [SLOW TEST:6.271 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4372,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:19:24.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:19:24.955: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:19:26.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9408" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":262,"skipped":4383,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:19:26.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 24 22:19:26.125: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:19:39.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1824" for this suite. 
• [SLOW TEST:13.709 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":263,"skipped":4390,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:19:39.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8836 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss 
in namespace statefulset-8836 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8836 May 24 22:19:39.897: INFO: Found 0 stateful pods, waiting for 1 May 24 22:19:49.903: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 24 22:19:49.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8836 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 22:19:50.174: INFO: stderr: "I0524 22:19:50.035651 3476 log.go:172] (0xc000a048f0) (0xc0009b4000) Create stream\nI0524 22:19:50.035709 3476 log.go:172] (0xc000a048f0) (0xc0009b4000) Stream added, broadcasting: 1\nI0524 22:19:50.038865 3476 log.go:172] (0xc000a048f0) Reply frame received for 1\nI0524 22:19:50.038916 3476 log.go:172] (0xc000a048f0) (0xc0009b40a0) Create stream\nI0524 22:19:50.038931 3476 log.go:172] (0xc000a048f0) (0xc0009b40a0) Stream added, broadcasting: 3\nI0524 22:19:50.039916 3476 log.go:172] (0xc000a048f0) Reply frame received for 3\nI0524 22:19:50.039935 3476 log.go:172] (0xc000a048f0) (0xc000709a40) Create stream\nI0524 22:19:50.039944 3476 log.go:172] (0xc000a048f0) (0xc000709a40) Stream added, broadcasting: 5\nI0524 22:19:50.040965 3476 log.go:172] (0xc000a048f0) Reply frame received for 5\nI0524 22:19:50.119938 3476 log.go:172] (0xc000a048f0) Data frame received for 5\nI0524 22:19:50.119965 3476 log.go:172] (0xc000709a40) (5) Data frame handling\nI0524 22:19:50.119980 3476 log.go:172] (0xc000709a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 22:19:50.167301 3476 log.go:172] (0xc000a048f0) Data frame received for 3\nI0524 22:19:50.167356 3476 log.go:172] (0xc0009b40a0) (3) Data frame handling\nI0524 22:19:50.167380 3476 log.go:172] (0xc0009b40a0) (3) Data frame sent\nI0524 22:19:50.167393 3476 log.go:172] (0xc000a048f0) 
Data frame received for 3\nI0524 22:19:50.167403 3476 log.go:172] (0xc0009b40a0) (3) Data frame handling\nI0524 22:19:50.167538 3476 log.go:172] (0xc000a048f0) Data frame received for 5\nI0524 22:19:50.167574 3476 log.go:172] (0xc000709a40) (5) Data frame handling\nI0524 22:19:50.169352 3476 log.go:172] (0xc000a048f0) Data frame received for 1\nI0524 22:19:50.169385 3476 log.go:172] (0xc0009b4000) (1) Data frame handling\nI0524 22:19:50.169427 3476 log.go:172] (0xc0009b4000) (1) Data frame sent\nI0524 22:19:50.169457 3476 log.go:172] (0xc000a048f0) (0xc0009b4000) Stream removed, broadcasting: 1\nI0524 22:19:50.169560 3476 log.go:172] (0xc000a048f0) Go away received\nI0524 22:19:50.169825 3476 log.go:172] (0xc000a048f0) (0xc0009b4000) Stream removed, broadcasting: 1\nI0524 22:19:50.169844 3476 log.go:172] (0xc000a048f0) (0xc0009b40a0) Stream removed, broadcasting: 3\nI0524 22:19:50.169857 3476 log.go:172] (0xc000a048f0) (0xc000709a40) Stream removed, broadcasting: 5\n" May 24 22:19:50.175: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 22:19:50.175: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 22:19:50.178: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 24 22:20:00.183: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 22:20:00.183: INFO: Waiting for statefulset status.replicas updated to 0 May 24 22:20:00.201: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999353s May 24 22:20:01.206: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99227299s May 24 22:20:02.211: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987395575s May 24 22:20:03.216: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982504378s May 24 22:20:04.221: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 5.977185197s May 24 22:20:05.225: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972401509s May 24 22:20:06.230: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.968385987s May 24 22:20:07.233: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.96361845s May 24 22:20:08.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96013247s May 24 22:20:09.242: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.616134ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8836 May 24 22:20:10.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 22:20:10.483: INFO: stderr: "I0524 22:20:10.379347 3499 log.go:172] (0xc0008e48f0) (0xc00097e000) Create stream\nI0524 22:20:10.379403 3499 log.go:172] (0xc0008e48f0) (0xc00097e000) Stream added, broadcasting: 1\nI0524 22:20:10.381833 3499 log.go:172] (0xc0008e48f0) Reply frame received for 1\nI0524 22:20:10.381889 3499 log.go:172] (0xc0008e48f0) (0xc00097e0a0) Create stream\nI0524 22:20:10.381903 3499 log.go:172] (0xc0008e48f0) (0xc00097e0a0) Stream added, broadcasting: 3\nI0524 22:20:10.382826 3499 log.go:172] (0xc0008e48f0) Reply frame received for 3\nI0524 22:20:10.382889 3499 log.go:172] (0xc0008e48f0) (0xc00097e1e0) Create stream\nI0524 22:20:10.382909 3499 log.go:172] (0xc0008e48f0) (0xc00097e1e0) Stream added, broadcasting: 5\nI0524 22:20:10.383953 3499 log.go:172] (0xc0008e48f0) Reply frame received for 5\nI0524 22:20:10.473896 3499 log.go:172] (0xc0008e48f0) Data frame received for 5\nI0524 22:20:10.473959 3499 log.go:172] (0xc00097e1e0) (5) Data frame handling\nI0524 22:20:10.473980 3499 log.go:172] (0xc00097e1e0) (5) Data frame sent\nI0524 22:20:10.473997 3499 log.go:172] (0xc0008e48f0) Data frame received 
for 5\nI0524 22:20:10.474031 3499 log.go:172] (0xc00097e1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 22:20:10.474053 3499 log.go:172] (0xc0008e48f0) Data frame received for 3\nI0524 22:20:10.474070 3499 log.go:172] (0xc00097e0a0) (3) Data frame handling\nI0524 22:20:10.474087 3499 log.go:172] (0xc00097e0a0) (3) Data frame sent\nI0524 22:20:10.474107 3499 log.go:172] (0xc0008e48f0) Data frame received for 3\nI0524 22:20:10.474121 3499 log.go:172] (0xc00097e0a0) (3) Data frame handling\nI0524 22:20:10.476881 3499 log.go:172] (0xc0008e48f0) Data frame received for 1\nI0524 22:20:10.476905 3499 log.go:172] (0xc00097e000) (1) Data frame handling\nI0524 22:20:10.476922 3499 log.go:172] (0xc00097e000) (1) Data frame sent\nI0524 22:20:10.476938 3499 log.go:172] (0xc0008e48f0) (0xc00097e000) Stream removed, broadcasting: 1\nI0524 22:20:10.476955 3499 log.go:172] (0xc0008e48f0) Go away received\nI0524 22:20:10.477574 3499 log.go:172] (0xc0008e48f0) (0xc00097e000) Stream removed, broadcasting: 1\nI0524 22:20:10.477596 3499 log.go:172] (0xc0008e48f0) (0xc00097e0a0) Stream removed, broadcasting: 3\nI0524 22:20:10.477606 3499 log.go:172] (0xc0008e48f0) (0xc00097e1e0) Stream removed, broadcasting: 5\n" May 24 22:20:10.483: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 22:20:10.483: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 22:20:10.487: INFO: Found 1 stateful pods, waiting for 3 May 24 22:20:20.492: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 24 22:20:20.492: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 24 22:20:20.492: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with 
unhealthy stateful pod May 24 22:20:20.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8836 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 22:20:20.740: INFO: stderr: "I0524 22:20:20.629380 3521 log.go:172] (0xc000b640b0) (0xc0002c75e0) Create stream\nI0524 22:20:20.629451 3521 log.go:172] (0xc000b640b0) (0xc0002c75e0) Stream added, broadcasting: 1\nI0524 22:20:20.632190 3521 log.go:172] (0xc000b640b0) Reply frame received for 1\nI0524 22:20:20.632234 3521 log.go:172] (0xc000b640b0) (0xc00097a000) Create stream\nI0524 22:20:20.632247 3521 log.go:172] (0xc000b640b0) (0xc00097a000) Stream added, broadcasting: 3\nI0524 22:20:20.633548 3521 log.go:172] (0xc000b640b0) Reply frame received for 3\nI0524 22:20:20.633595 3521 log.go:172] (0xc000b640b0) (0xc00093c000) Create stream\nI0524 22:20:20.633611 3521 log.go:172] (0xc000b640b0) (0xc00093c000) Stream added, broadcasting: 5\nI0524 22:20:20.634620 3521 log.go:172] (0xc000b640b0) Reply frame received for 5\nI0524 22:20:20.732502 3521 log.go:172] (0xc000b640b0) Data frame received for 5\nI0524 22:20:20.732551 3521 log.go:172] (0xc000b640b0) Data frame received for 3\nI0524 22:20:20.732588 3521 log.go:172] (0xc00097a000) (3) Data frame handling\nI0524 22:20:20.732607 3521 log.go:172] (0xc00097a000) (3) Data frame sent\nI0524 22:20:20.732619 3521 log.go:172] (0xc000b640b0) Data frame received for 3\nI0524 22:20:20.732629 3521 log.go:172] (0xc00097a000) (3) Data frame handling\nI0524 22:20:20.732676 3521 log.go:172] (0xc00093c000) (5) Data frame handling\nI0524 22:20:20.732706 3521 log.go:172] (0xc00093c000) (5) Data frame sent\nI0524 22:20:20.732722 3521 log.go:172] (0xc000b640b0) Data frame received for 5\nI0524 22:20:20.732734 3521 log.go:172] (0xc00093c000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 22:20:20.734806 3521 log.go:172] (0xc000b640b0) Data frame received for 1\nI0524 
22:20:20.734836 3521 log.go:172] (0xc0002c75e0) (1) Data frame handling\nI0524 22:20:20.734852 3521 log.go:172] (0xc0002c75e0) (1) Data frame sent\nI0524 22:20:20.734870 3521 log.go:172] (0xc000b640b0) (0xc0002c75e0) Stream removed, broadcasting: 1\nI0524 22:20:20.734893 3521 log.go:172] (0xc000b640b0) Go away received\nI0524 22:20:20.735555 3521 log.go:172] (0xc000b640b0) (0xc0002c75e0) Stream removed, broadcasting: 1\nI0524 22:20:20.735598 3521 log.go:172] (0xc000b640b0) (0xc00097a000) Stream removed, broadcasting: 3\nI0524 22:20:20.735621 3521 log.go:172] (0xc000b640b0) (0xc00093c000) Stream removed, broadcasting: 5\n" May 24 22:20:20.740: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 22:20:20.740: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 22:20:20.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8836 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 22:20:21.027: INFO: stderr: "I0524 22:20:20.870469 3544 log.go:172] (0xc0000f2fd0) (0xc000708000) Create stream\nI0524 22:20:20.870528 3544 log.go:172] (0xc0000f2fd0) (0xc000708000) Stream added, broadcasting: 1\nI0524 22:20:20.873440 3544 log.go:172] (0xc0000f2fd0) Reply frame received for 1\nI0524 22:20:20.873509 3544 log.go:172] (0xc0000f2fd0) (0xc000683a40) Create stream\nI0524 22:20:20.873530 3544 log.go:172] (0xc0000f2fd0) (0xc000683a40) Stream added, broadcasting: 3\nI0524 22:20:20.874609 3544 log.go:172] (0xc0000f2fd0) Reply frame received for 3\nI0524 22:20:20.874651 3544 log.go:172] (0xc0000f2fd0) (0xc0007080a0) Create stream\nI0524 22:20:20.874664 3544 log.go:172] (0xc0000f2fd0) (0xc0007080a0) Stream added, broadcasting: 5\nI0524 22:20:20.875577 3544 log.go:172] (0xc0000f2fd0) Reply frame received for 5\nI0524 22:20:20.936041 3544 log.go:172] (0xc0000f2fd0) Data frame 
received for 5\nI0524 22:20:20.936078 3544 log.go:172] (0xc0007080a0) (5) Data frame handling\nI0524 22:20:20.936106 3544 log.go:172] (0xc0007080a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 22:20:21.021415 3544 log.go:172] (0xc0000f2fd0) Data frame received for 3\nI0524 22:20:21.021449 3544 log.go:172] (0xc000683a40) (3) Data frame handling\nI0524 22:20:21.021466 3544 log.go:172] (0xc000683a40) (3) Data frame sent\nI0524 22:20:21.021551 3544 log.go:172] (0xc0000f2fd0) Data frame received for 5\nI0524 22:20:21.021586 3544 log.go:172] (0xc0007080a0) (5) Data frame handling\nI0524 22:20:21.021736 3544 log.go:172] (0xc0000f2fd0) Data frame received for 3\nI0524 22:20:21.021751 3544 log.go:172] (0xc000683a40) (3) Data frame handling\nI0524 22:20:21.023351 3544 log.go:172] (0xc0000f2fd0) Data frame received for 1\nI0524 22:20:21.023389 3544 log.go:172] (0xc000708000) (1) Data frame handling\nI0524 22:20:21.023412 3544 log.go:172] (0xc000708000) (1) Data frame sent\nI0524 22:20:21.023452 3544 log.go:172] (0xc0000f2fd0) (0xc000708000) Stream removed, broadcasting: 1\nI0524 22:20:21.023491 3544 log.go:172] (0xc0000f2fd0) Go away received\nI0524 22:20:21.023926 3544 log.go:172] (0xc0000f2fd0) (0xc000708000) Stream removed, broadcasting: 1\nI0524 22:20:21.023949 3544 log.go:172] (0xc0000f2fd0) (0xc000683a40) Stream removed, broadcasting: 3\nI0524 22:20:21.023961 3544 log.go:172] (0xc0000f2fd0) (0xc0007080a0) Stream removed, broadcasting: 5\n" May 24 22:20:21.028: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 22:20:21.028: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 22:20:21.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8836 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 22:20:21.283: INFO: stderr: 
"I0524 22:20:21.173263 3565 log.go:172] (0xc000ad8000) (0xc000615a40) Create stream\nI0524 22:20:21.173314 3565 log.go:172] (0xc000ad8000) (0xc000615a40) Stream added, broadcasting: 1\nI0524 22:20:21.176185 3565 log.go:172] (0xc000ad8000) Reply frame received for 1\nI0524 22:20:21.176255 3565 log.go:172] (0xc000ad8000) (0xc000956000) Create stream\nI0524 22:20:21.176274 3565 log.go:172] (0xc000ad8000) (0xc000956000) Stream added, broadcasting: 3\nI0524 22:20:21.177898 3565 log.go:172] (0xc000ad8000) Reply frame received for 3\nI0524 22:20:21.177962 3565 log.go:172] (0xc000ad8000) (0xc000615ae0) Create stream\nI0524 22:20:21.177987 3565 log.go:172] (0xc000ad8000) (0xc000615ae0) Stream added, broadcasting: 5\nI0524 22:20:21.179184 3565 log.go:172] (0xc000ad8000) Reply frame received for 5\nI0524 22:20:21.233893 3565 log.go:172] (0xc000ad8000) Data frame received for 5\nI0524 22:20:21.233916 3565 log.go:172] (0xc000615ae0) (5) Data frame handling\nI0524 22:20:21.233924 3565 log.go:172] (0xc000615ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 22:20:21.272668 3565 log.go:172] (0xc000ad8000) Data frame received for 3\nI0524 22:20:21.272845 3565 log.go:172] (0xc000956000) (3) Data frame handling\nI0524 22:20:21.272870 3565 log.go:172] (0xc000956000) (3) Data frame sent\nI0524 22:20:21.273036 3565 log.go:172] (0xc000ad8000) Data frame received for 5\nI0524 22:20:21.273269 3565 log.go:172] (0xc000615ae0) (5) Data frame handling\nI0524 22:20:21.273307 3565 log.go:172] (0xc000ad8000) Data frame received for 3\nI0524 22:20:21.273438 3565 log.go:172] (0xc000956000) (3) Data frame handling\nI0524 22:20:21.275323 3565 log.go:172] (0xc000ad8000) Data frame received for 1\nI0524 22:20:21.275367 3565 log.go:172] (0xc000615a40) (1) Data frame handling\nI0524 22:20:21.275389 3565 log.go:172] (0xc000615a40) (1) Data frame sent\nI0524 22:20:21.275409 3565 log.go:172] (0xc000ad8000) (0xc000615a40) Stream removed, broadcasting: 1\nI0524 22:20:21.275429 
3565 log.go:172] (0xc000ad8000) Go away received\nI0524 22:20:21.275966 3565 log.go:172] (0xc000ad8000) (0xc000615a40) Stream removed, broadcasting: 1\nI0524 22:20:21.275990 3565 log.go:172] (0xc000ad8000) (0xc000956000) Stream removed, broadcasting: 3\nI0524 22:20:21.276002 3565 log.go:172] (0xc000ad8000) (0xc000615ae0) Stream removed, broadcasting: 5\n" May 24 22:20:21.283: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 22:20:21.283: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 22:20:21.283: INFO: Waiting for statefulset status.replicas updated to 0 May 24 22:20:21.287: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 24 22:20:31.295: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 22:20:31.295: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 24 22:20:31.295: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 24 22:20:31.305: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999609s May 24 22:20:32.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996364169s May 24 22:20:33.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990851047s May 24 22:20:34.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.960417608s May 24 22:20:35.351: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.955286078s May 24 22:20:36.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.95074957s May 24 22:20:37.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945254419s May 24 22:20:38.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.9409038s May 24 22:20:39.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 
1.935464325s May 24 22:20:40.376: INFO: Verifying statefulset ss doesn't scale past 3 for another 931.480356ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8836 May 24 22:20:41.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 22:20:41.679: INFO: stderr: "I0524 22:20:41.579604 3588 log.go:172] (0xc0009da630) (0xc0008c8000) Create stream\nI0524 22:20:41.579661 3588 log.go:172] (0xc0009da630) (0xc0008c8000) Stream added, broadcasting: 1\nI0524 22:20:41.581545 3588 log.go:172] (0xc0009da630) Reply frame received for 1\nI0524 22:20:41.581578 3588 log.go:172] (0xc0009da630) (0xc0006a9b80) Create stream\nI0524 22:20:41.581588 3588 log.go:172] (0xc0009da630) (0xc0006a9b80) Stream added, broadcasting: 3\nI0524 22:20:41.582498 3588 log.go:172] (0xc0009da630) Reply frame received for 3\nI0524 22:20:41.582524 3588 log.go:172] (0xc0009da630) (0xc0008c80a0) Create stream\nI0524 22:20:41.582532 3588 log.go:172] (0xc0009da630) (0xc0008c80a0) Stream added, broadcasting: 5\nI0524 22:20:41.583253 3588 log.go:172] (0xc0009da630) Reply frame received for 5\nI0524 22:20:41.671701 3588 log.go:172] (0xc0009da630) Data frame received for 5\nI0524 22:20:41.671745 3588 log.go:172] (0xc0008c80a0) (5) Data frame handling\nI0524 22:20:41.671766 3588 log.go:172] (0xc0008c80a0) (5) Data frame sent\nI0524 22:20:41.671792 3588 log.go:172] (0xc0009da630) Data frame received for 5\nI0524 22:20:41.671809 3588 log.go:172] (0xc0008c80a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 22:20:41.671851 3588 log.go:172] (0xc0009da630) Data frame received for 3\nI0524 22:20:41.671891 3588 log.go:172] (0xc0006a9b80) (3) Data frame handling\nI0524 22:20:41.671928 3588 log.go:172] (0xc0006a9b80) (3) Data frame sent\nI0524 22:20:41.672205 3588 log.go:172] 
(0xc0009da630) Data frame received for 3\nI0524 22:20:41.672240 3588 log.go:172] (0xc0006a9b80) (3) Data frame handling\nI0524 22:20:41.674290 3588 log.go:172] (0xc0009da630) Data frame received for 1\nI0524 22:20:41.674408 3588 log.go:172] (0xc0008c8000) (1) Data frame handling\nI0524 22:20:41.674459 3588 log.go:172] (0xc0008c8000) (1) Data frame sent\nI0524 22:20:41.674490 3588 log.go:172] (0xc0009da630) (0xc0008c8000) Stream removed, broadcasting: 1\nI0524 22:20:41.674521 3588 log.go:172] (0xc0009da630) Go away received\nI0524 22:20:41.674943 3588 log.go:172] (0xc0009da630) (0xc0008c8000) Stream removed, broadcasting: 1\nI0524 22:20:41.674965 3588 log.go:172] (0xc0009da630) (0xc0006a9b80) Stream removed, broadcasting: 3\nI0524 22:20:41.674978 3588 log.go:172] (0xc0009da630) (0xc0008c80a0) Stream removed, broadcasting: 5\n" May 24 22:20:41.679: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 22:20:41.679: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 22:20:41.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8836 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 22:20:41.898: INFO: stderr: "I0524 22:20:41.814819 3608 log.go:172] (0xc0000f5550) (0xc0005afae0) Create stream\nI0524 22:20:41.814895 3608 log.go:172] (0xc0000f5550) (0xc0005afae0) Stream added, broadcasting: 1\nI0524 22:20:41.817407 3608 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0524 22:20:41.817443 3608 log.go:172] (0xc0000f5550) (0xc0009e4000) Create stream\nI0524 22:20:41.817452 3608 log.go:172] (0xc0000f5550) (0xc0009e4000) Stream added, broadcasting: 3\nI0524 22:20:41.818371 3608 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0524 22:20:41.818455 3608 log.go:172] (0xc0000f5550) (0xc0003cc000) Create stream\nI0524 22:20:41.818490 3608 log.go:172] 
(0xc0000f5550) (0xc0003cc000) Stream added, broadcasting: 5\nI0524 22:20:41.819451 3608 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0524 22:20:41.890598 3608 log.go:172] (0xc0000f5550) Data frame received for 3\nI0524 22:20:41.890662 3608 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0524 22:20:41.890687 3608 log.go:172] (0xc0009e4000) (3) Data frame sent\nI0524 22:20:41.890708 3608 log.go:172] (0xc0000f5550) Data frame received for 3\nI0524 22:20:41.890736 3608 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0524 22:20:41.890762 3608 log.go:172] (0xc0000f5550) Data frame received for 5\nI0524 22:20:41.890779 3608 log.go:172] (0xc0003cc000) (5) Data frame handling\nI0524 22:20:41.890793 3608 log.go:172] (0xc0003cc000) (5) Data frame sent\nI0524 22:20:41.890813 3608 log.go:172] (0xc0000f5550) Data frame received for 5\nI0524 22:20:41.890822 3608 log.go:172] (0xc0003cc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 22:20:41.892330 3608 log.go:172] (0xc0000f5550) Data frame received for 1\nI0524 22:20:41.892364 3608 log.go:172] (0xc0005afae0) (1) Data frame handling\nI0524 22:20:41.892382 3608 log.go:172] (0xc0005afae0) (1) Data frame sent\nI0524 22:20:41.892405 3608 log.go:172] (0xc0000f5550) (0xc0005afae0) Stream removed, broadcasting: 1\nI0524 22:20:41.892446 3608 log.go:172] (0xc0000f5550) Go away received\nI0524 22:20:41.892969 3608 log.go:172] (0xc0000f5550) (0xc0005afae0) Stream removed, broadcasting: 1\nI0524 22:20:41.893003 3608 log.go:172] (0xc0000f5550) (0xc0009e4000) Stream removed, broadcasting: 3\nI0524 22:20:41.893016 3608 log.go:172] (0xc0000f5550) (0xc0003cc000) Stream removed, broadcasting: 5\n" May 24 22:20:41.898: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 22:20:41.898: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 22:20:41.898: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8836 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 22:20:42.109: INFO: stderr: "I0524 22:20:42.032866 3631 log.go:172] (0xc0004bd130) (0xc0007ee1e0) Create stream\nI0524 22:20:42.032931 3631 log.go:172] (0xc0004bd130) (0xc0007ee1e0) Stream added, broadcasting: 1\nI0524 22:20:42.036207 3631 log.go:172] (0xc0004bd130) Reply frame received for 1\nI0524 22:20:42.036418 3631 log.go:172] (0xc0004bd130) (0xc000a8a000) Create stream\nI0524 22:20:42.036450 3631 log.go:172] (0xc0004bd130) (0xc000a8a000) Stream added, broadcasting: 3\nI0524 22:20:42.037550 3631 log.go:172] (0xc0004bd130) Reply frame received for 3\nI0524 22:20:42.037588 3631 log.go:172] (0xc0004bd130) (0xc0006b1ae0) Create stream\nI0524 22:20:42.037600 3631 log.go:172] (0xc0004bd130) (0xc0006b1ae0) Stream added, broadcasting: 5\nI0524 22:20:42.038423 3631 log.go:172] (0xc0004bd130) Reply frame received for 5\nI0524 22:20:42.102698 3631 log.go:172] (0xc0004bd130) Data frame received for 5\nI0524 22:20:42.102722 3631 log.go:172] (0xc0006b1ae0) (5) Data frame handling\nI0524 22:20:42.102733 3631 log.go:172] (0xc0006b1ae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 22:20:42.102751 3631 log.go:172] (0xc0004bd130) Data frame received for 3\nI0524 22:20:42.102777 3631 log.go:172] (0xc000a8a000) (3) Data frame handling\nI0524 22:20:42.102794 3631 log.go:172] (0xc000a8a000) (3) Data frame sent\nI0524 22:20:42.102804 3631 log.go:172] (0xc0004bd130) Data frame received for 3\nI0524 22:20:42.102811 3631 log.go:172] (0xc000a8a000) (3) Data frame handling\nI0524 22:20:42.102837 3631 log.go:172] (0xc0004bd130) Data frame received for 5\nI0524 22:20:42.102845 3631 log.go:172] (0xc0006b1ae0) (5) Data frame handling\nI0524 22:20:42.104104 3631 log.go:172] (0xc0004bd130) Data frame received for 1\nI0524 22:20:42.104117 3631 log.go:172] (0xc0007ee1e0) (1) Data frame 
handling\nI0524 22:20:42.104128 3631 log.go:172] (0xc0007ee1e0) (1) Data frame sent\nI0524 22:20:42.104138 3631 log.go:172] (0xc0004bd130) (0xc0007ee1e0) Stream removed, broadcasting: 1\nI0524 22:20:42.104251 3631 log.go:172] (0xc0004bd130) Go away received\nI0524 22:20:42.104446 3631 log.go:172] (0xc0004bd130) (0xc0007ee1e0) Stream removed, broadcasting: 1\nI0524 22:20:42.104458 3631 log.go:172] (0xc0004bd130) (0xc000a8a000) Stream removed, broadcasting: 3\nI0524 22:20:42.104464 3631 log.go:172] (0xc0004bd130) (0xc0006b1ae0) Stream removed, broadcasting: 5\n" May 24 22:20:42.110: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 22:20:42.110: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 22:20:42.110: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 24 22:21:12.126: INFO: Deleting all statefulset in ns statefulset-8836 May 24 22:21:12.129: INFO: Scaling statefulset ss to 0 May 24 22:21:12.139: INFO: Waiting for statefulset status.replicas updated to 0 May 24 22:21:12.142: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:12.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8836" for this suite. 
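The StatefulSet test above gates pod readiness by moving `index.html` into httpd's docroot via `kubectl exec`, then verifies that scale-up and scale-down proceed in ordinal order. A minimal sketch of the kind of StatefulSet being exercised — the service name, label, and readiness-probe details are illustrative assumptions, not taken from the log:

```yaml
# Illustrative sketch only -- not the exact manifest the e2e framework creates.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-8836
spec:
  serviceName: test                  # assumed headless service name
  replicas: 3
  podManagementPolicy: OrderedReady  # default; enforces the ordered scaling under test
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
        readinessProbe:              # pod turns Ready only once index.html is in the docroot
          httpGet:
            path: /index.html
            port: 80
```

With `OrderedReady`, scale-down removes the highest ordinal first (ss-2, ss-1, ss-0), which is the "reverse order" the test verifies.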
• [SLOW TEST:92.368 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":264,"skipped":4392,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:12.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 24 22:21:12.247: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57f0ec3b-5562-4463-99cb-55775a1a9585" in namespace "downward-api-2365" to be "success or failure" May 24 
22:21:12.250: INFO: Pod "downwardapi-volume-57f0ec3b-5562-4463-99cb-55775a1a9585": Phase="Pending", Reason="", readiness=false. Elapsed: 3.255152ms May 24 22:21:14.318: INFO: Pod "downwardapi-volume-57f0ec3b-5562-4463-99cb-55775a1a9585": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070634109s May 24 22:21:16.322: INFO: Pod "downwardapi-volume-57f0ec3b-5562-4463-99cb-55775a1a9585": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075128523s STEP: Saw pod success May 24 22:21:16.322: INFO: Pod "downwardapi-volume-57f0ec3b-5562-4463-99cb-55775a1a9585" satisfied condition "success or failure" May 24 22:21:16.324: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-57f0ec3b-5562-4463-99cb-55775a1a9585 container client-container: STEP: delete the pod May 24 22:21:16.390: INFO: Waiting for pod downwardapi-volume-57f0ec3b-5562-4463-99cb-55775a1a9585 to disappear May 24 22:21:16.408: INFO: Pod downwardapi-volume-57f0ec3b-5562-4463-99cb-55775a1a9585 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:16.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2365" for this suite. 
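The DefaultMode test above checks that a mode set on a downwardAPI volume is applied to every projected file. A hedged sketch of such a pod — the image, command, and mount path are assumptions for illustration:

```yaml
# Sketch of the shape of the test pod; image and paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # file mode should show 0400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400        # the DefaultMode under test, applied to each item below
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```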
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4404,"failed":0} ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:16.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 24 22:21:16.500: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix262457049/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:16.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1528" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":266,"skipped":4404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:16.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 22:21:16.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2826' May 24 22:21:17.034: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 24 22:21:17.034: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 24 22:21:19.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2826' May 24 22:21:19.293: INFO: stderr: "" May 24 22:21:19.293: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:19.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2826" for this suite. 
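As the deprecation warning in the log notes, `kubectl run --generator=deployment/apps.v1` was later removed; the replacement path is `kubectl create deployment`, which produces a Deployment roughly like this sketch (label and replica values are assumptions):

```yaml
# Approximate equivalent of the deprecated generator; not taken from the log.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: e2e-test-httpd-deployment
  template:
    metadata:
      labels:
        app: e2e-test-httpd-deployment
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```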
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":267,"skipped":4429,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:19.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-6bff8b9d-ce78-4d23-947b-9fd1bcb2fb27 STEP: Creating a pod to test consume secrets May 24 22:21:19.506: INFO: Waiting up to 5m0s for pod "pod-secrets-9c61ab53-d91e-4477-86d7-71d75b1ba9b8" in namespace "secrets-3491" to be "success or failure" May 24 22:21:19.510: INFO: Pod "pod-secrets-9c61ab53-d91e-4477-86d7-71d75b1ba9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073644ms May 24 22:21:21.554: INFO: Pod "pod-secrets-9c61ab53-d91e-4477-86d7-71d75b1ba9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048313336s May 24 22:21:23.559: INFO: Pod "pod-secrets-9c61ab53-d91e-4477-86d7-71d75b1ba9b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052862267s STEP: Saw pod success May 24 22:21:23.559: INFO: Pod "pod-secrets-9c61ab53-d91e-4477-86d7-71d75b1ba9b8" satisfied condition "success or failure" May 24 22:21:23.562: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9c61ab53-d91e-4477-86d7-71d75b1ba9b8 container secret-volume-test: STEP: delete the pod May 24 22:21:23.670: INFO: Waiting for pod pod-secrets-9c61ab53-d91e-4477-86d7-71d75b1ba9b8 to disappear May 24 22:21:23.712: INFO: Pod pod-secrets-9c61ab53-d91e-4477-86d7-71d75b1ba9b8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:23.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3491" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4443,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:23.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 24 22:21:23.828: INFO: Waiting up to 5m0s for pod "pod-96eb88b7-e1ce-4029-b69e-cb98020bd0f3" in namespace "emptydir-7882" to be "success or 
failure" May 24 22:21:23.831: INFO: Pod "pod-96eb88b7-e1ce-4029-b69e-cb98020bd0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.815395ms May 24 22:21:25.836: INFO: Pod "pod-96eb88b7-e1ce-4029-b69e-cb98020bd0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007912804s May 24 22:21:27.841: INFO: Pod "pod-96eb88b7-e1ce-4029-b69e-cb98020bd0f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012919907s STEP: Saw pod success May 24 22:21:27.841: INFO: Pod "pod-96eb88b7-e1ce-4029-b69e-cb98020bd0f3" satisfied condition "success or failure" May 24 22:21:27.845: INFO: Trying to get logs from node jerma-worker pod pod-96eb88b7-e1ce-4029-b69e-cb98020bd0f3 container test-container: STEP: delete the pod May 24 22:21:27.875: INFO: Waiting for pod pod-96eb88b7-e1ce-4029-b69e-cb98020bd0f3 to disappear May 24 22:21:27.879: INFO: Pod pod-96eb88b7-e1ce-4029-b69e-cb98020bd0f3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:27.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7882" for this suite. 
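The `(non-root,0666,default)` variant above names the three parameters of the EmptyDir matrix: the user the container runs as, the file mode written, and the volume medium. A hedged sketch of such a pod — the uid, image, and file path are illustrative assumptions:

```yaml
# Sketch only; uid, image, and paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example
spec:
  securityContext:
    runAsUser: 1001            # "non-root" part of the test variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium (node disk), as opposed to medium: Memory
```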
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4449,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:27.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 22:21:28.478: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 22:21:30.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955688, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955688, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725955688, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725955688, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 22:21:33.689: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:33.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4839" for this suite. STEP: Destroying namespace "webhook-4839-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.958 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":270,"skipped":4449,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:33.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-3fbbd6d4-2c29-4e32-8893-e9f5a4d449af STEP: Creating a pod to test consume configMaps May 24 22:21:34.344: INFO: Waiting up to 5m0s for pod "pod-configmaps-d69481e9-91f5-4341-9aba-bbdc2d7acf3d" in namespace "configmap-5998" to be "success or failure" May 24 22:21:34.371: INFO: Pod "pod-configmaps-d69481e9-91f5-4341-9aba-bbdc2d7acf3d": 
Phase="Pending", Reason="", readiness=false. Elapsed: 27.62132ms May 24 22:21:36.450: INFO: Pod "pod-configmaps-d69481e9-91f5-4341-9aba-bbdc2d7acf3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105837387s May 24 22:21:38.454: INFO: Pod "pod-configmaps-d69481e9-91f5-4341-9aba-bbdc2d7acf3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11065324s STEP: Saw pod success May 24 22:21:38.455: INFO: Pod "pod-configmaps-d69481e9-91f5-4341-9aba-bbdc2d7acf3d" satisfied condition "success or failure" May 24 22:21:38.458: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d69481e9-91f5-4341-9aba-bbdc2d7acf3d container configmap-volume-test: STEP: delete the pod May 24 22:21:38.476: INFO: Waiting for pod pod-configmaps-d69481e9-91f5-4341-9aba-bbdc2d7acf3d to disappear May 24 22:21:38.521: INFO: Pod pod-configmaps-d69481e9-91f5-4341-9aba-bbdc2d7acf3d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:38.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5998" for this suite. 
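The "with mappings" wording above refers to the `items` field of a configMap volume, which remaps a ConfigMap key to a custom file path instead of projecting every key under its own name. A hedged sketch — the key, value, and paths are illustrative assumptions:

```yaml
# Sketch; key names and paths are assumptions, not taken from the log.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                   # the "mappings": key -> custom relative file path
      - key: data-1
        path: path/to/data-1
```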
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4456,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:38.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:21:38.603: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4a6b2390-3b88-4f89-9bd2-075510f1dc21" in namespace "security-context-test-3856" to be "success or failure" May 24 22:21:38.606: INFO: Pod "busybox-user-65534-4a6b2390-3b88-4f89-9bd2-075510f1dc21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.937743ms May 24 22:21:40.610: INFO: Pod "busybox-user-65534-4a6b2390-3b88-4f89-9bd2-075510f1dc21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007162372s May 24 22:21:42.614: INFO: Pod "busybox-user-65534-4a6b2390-3b88-4f89-9bd2-075510f1dc21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011288255s May 24 22:21:42.615: INFO: Pod "busybox-user-65534-4a6b2390-3b88-4f89-9bd2-075510f1dc21" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:42.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3856" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4467,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:42.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 24 22:21:42.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 24 22:21:42.751: INFO: stderr: "" May 24 22:21:42.751: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:42.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9032" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":273,"skipped":4468,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:42.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:21:42.801: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 24 22:21:45.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4394 create -f -' May 24 22:21:49.580: INFO: stderr: "" May 24 22:21:49.580: INFO: stdout: "e2e-test-crd-publish-openapi-7353-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 24 
22:21:49.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4394 delete e2e-test-crd-publish-openapi-7353-crds test-foo' May 24 22:21:49.686: INFO: stderr: "" May 24 22:21:49.686: INFO: stdout: "e2e-test-crd-publish-openapi-7353-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 24 22:21:49.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4394 apply -f -' May 24 22:21:49.940: INFO: stderr: "" May 24 22:21:49.940: INFO: stdout: "e2e-test-crd-publish-openapi-7353-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 24 22:21:49.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4394 delete e2e-test-crd-publish-openapi-7353-crds test-foo' May 24 22:21:50.070: INFO: stderr: "" May 24 22:21:50.070: INFO: stdout: "e2e-test-crd-publish-openapi-7353-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 24 22:21:50.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4394 create -f -' May 24 22:21:50.301: INFO: rc: 1 May 24 22:21:50.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4394 apply -f -' May 24 22:21:50.525: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 24 22:21:50.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4394 create -f -' May 24 22:21:50.765: INFO: rc: 1 May 24 22:21:50.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4394 apply -f -' May 24 22:21:51.007: INFO: rc: 1 STEP: kubectl explain works to explain CR 
properties May 24 22:21:51.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7353-crds' May 24 22:21:51.263: INFO: stderr: "" May 24 22:21:51.263: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7353-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 24 22:21:51.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7353-crds.metadata' May 24 22:21:51.487: INFO: stderr: "" May 24 22:21:51.487: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7353-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 24 22:21:51.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7353-crds.spec' May 24 22:21:51.780: INFO: stderr: "" May 24 22:21:51.780: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7353-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 24 22:21:51.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7353-crds.spec.bars' May 24 22:21:52.010: INFO: stderr: "" May 24 22:21:52.010: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7353-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 24 22:21:52.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7353-crds.spec.bars2' May 24 22:21:52.241: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:55.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4394" for this 
suite. • [SLOW TEST:12.391 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":274,"skipped":4470,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:55.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 24 22:21:55.249: INFO: Waiting up to 5m0s for pod "var-expansion-2fbda116-6a4f-49ec-a320-f58ab08479dd" in namespace "var-expansion-9019" to be "success or failure" May 24 22:21:55.306: INFO: Pod "var-expansion-2fbda116-6a4f-49ec-a320-f58ab08479dd": Phase="Pending", Reason="", readiness=false. Elapsed: 57.215417ms May 24 22:21:57.318: INFO: Pod "var-expansion-2fbda116-6a4f-49ec-a320-f58ab08479dd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.069109366s May 24 22:21:59.321: INFO: Pod "var-expansion-2fbda116-6a4f-49ec-a320-f58ab08479dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072613584s STEP: Saw pod success May 24 22:21:59.321: INFO: Pod "var-expansion-2fbda116-6a4f-49ec-a320-f58ab08479dd" satisfied condition "success or failure" May 24 22:21:59.359: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-2fbda116-6a4f-49ec-a320-f58ab08479dd container dapi-container: STEP: delete the pod May 24 22:21:59.394: INFO: Waiting for pod var-expansion-2fbda116-6a4f-49ec-a320-f58ab08479dd to disappear May 24 22:21:59.402: INFO: Pod var-expansion-2fbda116-6a4f-49ec-a320-f58ab08479dd no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:21:59.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9019" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:21:59.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-bfb649f8-88e7-4fc5-85e6-e0d14e462a1f STEP: Creating a pod to test consume secrets May 24 22:21:59.514: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f09f68b4-cb91-4402-98d9-57eabcaa04d7" in namespace "projected-9538" to be "success or failure" May 24 22:21:59.523: INFO: Pod "pod-projected-secrets-f09f68b4-cb91-4402-98d9-57eabcaa04d7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.539438ms May 24 22:22:01.683: INFO: Pod "pod-projected-secrets-f09f68b4-cb91-4402-98d9-57eabcaa04d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169567663s May 24 22:22:03.687: INFO: Pod "pod-projected-secrets-f09f68b4-cb91-4402-98d9-57eabcaa04d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173575832s STEP: Saw pod success May 24 22:22:03.687: INFO: Pod "pod-projected-secrets-f09f68b4-cb91-4402-98d9-57eabcaa04d7" satisfied condition "success or failure" May 24 22:22:03.690: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f09f68b4-cb91-4402-98d9-57eabcaa04d7 container projected-secret-volume-test: STEP: delete the pod May 24 22:22:03.787: INFO: Waiting for pod pod-projected-secrets-f09f68b4-cb91-4402-98d9-57eabcaa04d7 to disappear May 24 22:22:03.793: INFO: Pod pod-projected-secrets-f09f68b4-cb91-4402-98d9-57eabcaa04d7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:22:03.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9538" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4510,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:22:03.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 24 22:22:07.925: INFO: Waiting up to 5m0s for pod "client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16" in namespace "pods-530" to be "success or failure" May 24 22:22:07.941: INFO: Pod "client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16": Phase="Pending", Reason="", readiness=false. Elapsed: 16.261343ms May 24 22:22:09.959: INFO: Pod "client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033442286s May 24 22:22:11.963: INFO: Pod "client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16": Phase="Running", Reason="", readiness=true. Elapsed: 4.038140331s May 24 22:22:13.968: INFO: Pod "client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.04233447s STEP: Saw pod success May 24 22:22:13.968: INFO: Pod "client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16" satisfied condition "success or failure" May 24 22:22:13.970: INFO: Trying to get logs from node jerma-worker pod client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16 container env3cont: STEP: delete the pod May 24 22:22:14.004: INFO: Waiting for pod client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16 to disappear May 24 22:22:14.010: INFO: Pod client-envvars-f7377309-53c9-4328-b2ff-9ae2dfe7cd16 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:22:14.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-530" for this suite. • [SLOW TEST:10.218 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4515,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 24 22:22:14.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 24 22:22:18.638: INFO: Successfully updated pod "annotationupdate0b3eff35-a8da-40f9-bd3d-2a05640d9e50" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 24 22:22:20.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8348" for this suite. • [SLOW TEST:6.644 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 24 22:22:20.662: INFO: Running AfterSuite actions on all nodes May 24 22:22:20.662: INFO: Running AfterSuite actions on node 1 May 24 22:22:20.662: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4319.952 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS