I0607 21:08:42.175933 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0607 21:08:42.176154 6 e2e.go:109] Starting e2e run "f94a6f7f-8e5a-4257-a60b-544b9a974868" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591564121 - Will randomize all specs
Will run 278 of 4842 specs

Jun 7 21:08:42.243: INFO: >>> kubeConfig: /root/.kube/config
Jun 7 21:08:42.248: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 7 21:08:42.273: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 7 21:08:42.310: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 7 21:08:42.310: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 7 21:08:42.310: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 7 21:08:42.317: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 7 21:08:42.317: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 7 21:08:42.317: INFO: e2e test version: v1.17.4
Jun 7 21:08:42.318: INFO: kube-apiserver version: v1.17.2
Jun 7 21:08:42.318: INFO: >>> kubeConfig: /root/.kube/config
Jun 7 21:08:42.321: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:08:42.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
Jun 7 21:08:42.430: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:08:55.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-48" for this suite.
• [SLOW TEST:13.222 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":1,"skipped":37,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:08:55.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:08:59.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5328" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":68,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:08:59.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-c45804fa-07ba-4ca6-86ad-4673efd4d466
STEP: Creating a pod to test consume configMaps
Jun 7 21:08:59.811: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a" in namespace "projected-5816" to be "success or failure"
Jun 7 21:08:59.822: INFO: Pod "pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.033426ms
Jun 7 21:09:01.826: INFO: Pod "pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015466657s
Jun 7 21:09:03.831: INFO: Pod "pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a": Phase="Running", Reason="", readiness=true. Elapsed: 4.020173214s
Jun 7 21:09:05.835: INFO: Pod "pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024211727s
STEP: Saw pod success
Jun 7 21:09:05.835: INFO: Pod "pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a" satisfied condition "success or failure"
Jun 7 21:09:05.838: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a container projected-configmap-volume-test:
STEP: delete the pod
Jun 7 21:09:05.888: INFO: Waiting for pod pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a to disappear
Jun 7 21:09:05.904: INFO: Pod pod-projected-configmaps-37044984-f55e-43a5-862f-f8253c6fef6a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:09:05.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5816" for this suite.
• [SLOW TEST:6.236 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":89,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:09:05.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 21:09:05.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jun 7 21:09:06.117: INFO: stderr: ""
Jun 7 21:09:06.117: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:09:06.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6669" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":4,"skipped":93,"failed":0}
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:09:06.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6323
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-6323
I0607 21:09:06.295416 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6323, replica count: 2
I0607 21:09:09.345830 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0607 21:09:12.346143 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 7 21:09:12.346: INFO: Creating new exec pod
Jun 7 21:09:17.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6323 execpod52lh8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jun 7 21:09:20.007: INFO: stderr: "I0607 21:09:19.844565 48 log.go:172] (0xc0003e2000) (0xc00070bae0) Create stream\nI0607 21:09:19.844643 48 log.go:172] (0xc0003e2000) (0xc00070bae0) Stream added, broadcasting: 1\nI0607 21:09:19.847628 48 log.go:172] (0xc0003e2000) Reply frame received for 1\nI0607 21:09:19.847671 48 log.go:172] (0xc0003e2000) (0xc0008ba140) Create stream\nI0607 21:09:19.847680 48 log.go:172] (0xc0003e2000) (0xc0008ba140) Stream added, broadcasting: 3\nI0607 21:09:19.848559 48 log.go:172] (0xc0003e2000) Reply frame received for 3\nI0607 21:09:19.848599 48 log.go:172] (0xc0003e2000) (0xc0008ba1e0) Create stream\nI0607 21:09:19.848610 48 log.go:172] (0xc0003e2000) (0xc0008ba1e0) Stream added, broadcasting: 5\nI0607 21:09:19.849686 48 log.go:172] (0xc0003e2000) Reply frame received for 5\nI0607 21:09:19.978240 48 log.go:172] (0xc0003e2000) Data frame received for 5\nI0607 21:09:19.978261 48 log.go:172] (0xc0008ba1e0) (5) Data frame handling\nI0607 21:09:19.978271 48 log.go:172] (0xc0008ba1e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0607 21:09:19.998168 48 log.go:172] (0xc0003e2000) Data frame received for 3\nI0607 21:09:19.998220 48 log.go:172] (0xc0008ba140) (3) Data frame handling\nI0607 21:09:19.998267 48 log.go:172] (0xc0003e2000) Data frame received for 5\nI0607 21:09:19.998402 48 log.go:172] (0xc0008ba1e0) (5) Data frame handling\nI0607 21:09:19.998427 48 log.go:172] (0xc0008ba1e0) (5) Data frame sent\nI0607 21:09:19.998460 48 log.go:172] (0xc0003e2000) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0607 21:09:19.998489 48 log.go:172] (0xc0008ba1e0) (5) Data frame handling\nI0607 21:09:20.000075 48 log.go:172] (0xc0003e2000) Data frame received for 1\nI0607 21:09:20.000096 48 log.go:172] (0xc00070bae0) (1) Data frame handling\nI0607 21:09:20.000116 48 log.go:172] (0xc00070bae0) (1) Data frame sent\nI0607 21:09:20.000134 48 log.go:172] (0xc0003e2000) (0xc00070bae0) Stream removed, broadcasting: 1\nI0607 21:09:20.000337 48 log.go:172] (0xc0003e2000) Go away received\nI0607 21:09:20.000678 48 log.go:172] (0xc0003e2000) (0xc00070bae0) Stream removed, broadcasting: 1\nI0607 21:09:20.000700 48 log.go:172] (0xc0003e2000) (0xc0008ba140) Stream removed, broadcasting: 3\nI0607 21:09:20.000711 48 log.go:172] (0xc0003e2000) (0xc0008ba1e0) Stream removed, broadcasting: 5\n"
Jun 7 21:09:20.007: INFO: stdout: ""
Jun 7 21:09:20.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6323 execpod52lh8 -- /bin/sh -x -c nc -zv -t -w 2 10.99.137.105 80'
Jun 7 21:09:20.222: INFO: stderr: "I0607 21:09:20.148201 78 log.go:172] (0xc0009e00b0) (0xc000a6c320) Create stream\nI0607 21:09:20.148260 78 log.go:172] (0xc0009e00b0) (0xc000a6c320) Stream added, broadcasting: 1\nI0607 21:09:20.150918 78 log.go:172] (0xc0009e00b0) Reply frame received for 1\nI0607 21:09:20.150948 78 log.go:172] (0xc0009e00b0) (0xc000a6c3c0) Create stream\nI0607 21:09:20.150956 78 log.go:172] (0xc0009e00b0) (0xc000a6c3c0) Stream added, broadcasting: 3\nI0607 21:09:20.151710 78 log.go:172] (0xc0009e00b0) Reply frame received for 3\nI0607 21:09:20.151746 78 log.go:172] (0xc0009e00b0) (0xc0009ac000) Create stream\nI0607 21:09:20.151754 78 log.go:172] (0xc0009e00b0) (0xc0009ac000) Stream added, broadcasting: 5\nI0607 21:09:20.152431 78 log.go:172] (0xc0009e00b0) Reply frame received for 5\nI0607 21:09:20.214621 78 log.go:172] (0xc0009e00b0) Data frame received for 5\nI0607 21:09:20.214697 78 log.go:172] (0xc0009ac000) (5) Data frame handling\nI0607 21:09:20.214712 78 log.go:172] (0xc0009ac000) (5) Data frame sent\n+ nc -zv -t -w 2 10.99.137.105 80\nConnection to 10.99.137.105 80 port [tcp/http] succeeded!\nI0607 21:09:20.214749 78 log.go:172] (0xc0009e00b0) Data frame received for 3\nI0607 21:09:20.214767 78 log.go:172] (0xc000a6c3c0) (3) Data frame handling\nI0607 21:09:20.215010 78 log.go:172] (0xc0009e00b0) Data frame received for 5\nI0607 21:09:20.215032 78 log.go:172] (0xc0009ac000) (5) Data frame handling\nI0607 21:09:20.216352 78 log.go:172] (0xc0009e00b0) Data frame received for 1\nI0607 21:09:20.216369 78 log.go:172] (0xc000a6c320) (1) Data frame handling\nI0607 21:09:20.216383 78 log.go:172] (0xc000a6c320) (1) Data frame sent\nI0607 21:09:20.216396 78 log.go:172] (0xc0009e00b0) (0xc000a6c320) Stream removed, broadcasting: 1\nI0607 21:09:20.216411 78 log.go:172] (0xc0009e00b0) Go away received\nI0607 21:09:20.216806 78 log.go:172] (0xc0009e00b0) (0xc000a6c320) Stream removed, broadcasting: 1\nI0607 21:09:20.216823 78 log.go:172] (0xc0009e00b0) (0xc000a6c3c0) Stream removed, broadcasting: 3\nI0607 21:09:20.216830 78 log.go:172] (0xc0009e00b0) (0xc0009ac000) Stream removed, broadcasting: 5\n"
Jun 7 21:09:20.222: INFO: stdout: ""
Jun 7 21:09:20.222: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:09:20.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6323" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:14.130 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":5,"skipped":93,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:09:20.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 7 21:09:20.334: INFO: Waiting up to 5m0s for pod "pod-f5cb0f68-3456-432c-bbbf-861f93efca73" in namespace "emptydir-6397" to be "success or failure"
Jun 7 21:09:20.357: INFO: Pod "pod-f5cb0f68-3456-432c-bbbf-861f93efca73": Phase="Pending", Reason="", readiness=false. Elapsed: 23.049655ms
Jun 7 21:09:22.362: INFO: Pod "pod-f5cb0f68-3456-432c-bbbf-861f93efca73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028072594s
Jun 7 21:09:24.366: INFO: Pod "pod-f5cb0f68-3456-432c-bbbf-861f93efca73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032534018s
STEP: Saw pod success
Jun 7 21:09:24.366: INFO: Pod "pod-f5cb0f68-3456-432c-bbbf-861f93efca73" satisfied condition "success or failure"
Jun 7 21:09:24.369: INFO: Trying to get logs from node jerma-worker2 pod pod-f5cb0f68-3456-432c-bbbf-861f93efca73 container test-container:
STEP: delete the pod
Jun 7 21:09:24.422: INFO: Waiting for pod pod-f5cb0f68-3456-432c-bbbf-861f93efca73 to disappear
Jun 7 21:09:24.443: INFO: Pod pod-f5cb0f68-3456-432c-bbbf-861f93efca73 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:09:24.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6397" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":93,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:09:24.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jun 7 21:09:24.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6191'
Jun 7 21:09:24.904: INFO: stderr: ""
Jun 7 21:09:24.904: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 7 21:09:24.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6191'
Jun 7 21:09:25.027: INFO: stderr: ""
Jun 7 21:09:25.027: INFO: stdout: "update-demo-nautilus-k6l74 update-demo-nautilus-ktctd "
Jun 7 21:09:25.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k6l74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6191'
Jun 7 21:09:25.141: INFO: stderr: ""
Jun 7 21:09:25.141: INFO: stdout: ""
Jun 7 21:09:25.141: INFO: update-demo-nautilus-k6l74 is created but not running
Jun 7 21:09:30.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6191'
Jun 7 21:09:30.259: INFO: stderr: ""
Jun 7 21:09:30.259: INFO: stdout: "update-demo-nautilus-k6l74 update-demo-nautilus-ktctd "
Jun 7 21:09:30.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k6l74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6191'
Jun 7 21:09:30.364: INFO: stderr: ""
Jun 7 21:09:30.364: INFO: stdout: "true"
Jun 7 21:09:30.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k6l74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6191'
Jun 7 21:09:30.462: INFO: stderr: ""
Jun 7 21:09:30.462: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 21:09:30.462: INFO: validating pod update-demo-nautilus-k6l74
Jun 7 21:09:30.487: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 21:09:30.487: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 21:09:30.487: INFO: update-demo-nautilus-k6l74 is verified up and running
Jun 7 21:09:30.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktctd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6191'
Jun 7 21:09:30.592: INFO: stderr: ""
Jun 7 21:09:30.593: INFO: stdout: "true"
Jun 7 21:09:30.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktctd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6191'
Jun 7 21:09:30.697: INFO: stderr: ""
Jun 7 21:09:30.697: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 7 21:09:30.697: INFO: validating pod update-demo-nautilus-ktctd
Jun 7 21:09:30.708: INFO: got data: { "image": "nautilus.jpg" }
Jun 7 21:09:30.708: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 7 21:09:30.708: INFO: update-demo-nautilus-ktctd is verified up and running
STEP: using delete to clean up resources
Jun 7 21:09:30.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6191'
Jun 7 21:09:30.823: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 7 21:09:30.824: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 7 21:09:30.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6191'
Jun 7 21:09:30.929: INFO: stderr: "No resources found in kubectl-6191 namespace.\n"
Jun 7 21:09:30.929: INFO: stdout: ""
Jun 7 21:09:30.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6191 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 7 21:09:31.033: INFO: stderr: ""
Jun 7 21:09:31.033: INFO: stdout: "update-demo-nautilus-k6l74\nupdate-demo-nautilus-ktctd\n"
Jun 7 21:09:31.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6191'
Jun 7 21:09:31.633: INFO: stderr: "No resources found in kubectl-6191 namespace.\n"
Jun 7 21:09:31.633: INFO: stdout: ""
Jun 7 21:09:31.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6191 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 7 21:09:31.730: INFO: stderr: ""
Jun 7 21:09:31.730: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:09:31.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6191" for this suite.
• [SLOW TEST:7.284 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":7,"skipped":120,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:09:31.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 7 21:09:32.377: INFO: Waiting up to 5m0s for pod "pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986" in namespace "emptydir-6029" to be "success or failure"
Jun 7 21:09:32.643: INFO: Pod "pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986": Phase="Pending", Reason="", readiness=false. Elapsed: 266.830039ms
Jun 7 21:09:34.665: INFO: Pod "pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288513677s
Jun 7 21:09:36.670: INFO: Pod "pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986": Phase="Running", Reason="", readiness=true. Elapsed: 4.293198667s
Jun 7 21:09:38.674: INFO: Pod "pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.297755504s
STEP: Saw pod success
Jun 7 21:09:38.674: INFO: Pod "pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986" satisfied condition "success or failure"
Jun 7 21:09:38.677: INFO: Trying to get logs from node jerma-worker2 pod pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986 container test-container:
STEP: delete the pod
Jun 7 21:09:38.703: INFO: Waiting for pod pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986 to disappear
Jun 7 21:09:38.715: INFO: Pod pod-bda0fd43-1a4c-4ee9-9ec1-6f5db53fe986 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:09:38.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6029" for this suite.
• [SLOW TEST:6.985 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":125,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:09:38.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jun 7 21:09:38.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jun 7 21:09:39.008: INFO: stderr: ""
Jun 7 21:09:39.008: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:09:39.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-500" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":9,"skipped":154,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:09:39.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:10:10.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1607" for this suite.
STEP: Destroying namespace "nsdeletetest-4557" for this suite.
Jun 7 21:10:10.446: INFO: Namespace nsdeletetest-4557 was already deleted
STEP: Destroying namespace "nsdeletetest-9549" for this suite.
• [SLOW TEST:31.434 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":10,"skipped":154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:10:10.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:10:15.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7877" for this suite.
• [SLOW TEST:5.281 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":11,"skipped":214,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:10:15.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jun 7 21:10:15.843: INFO: Waiting up to 5m0s for pod "downward-api-e46b86ab-77f7-46de-aa77-971bb979226b" in namespace "downward-api-612" to be "success or failure"
Jun 7 21:10:15.858: INFO: Pod "downward-api-e46b86ab-77f7-46de-aa77-971bb979226b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.454446ms
Jun 7 21:10:17.862: INFO: Pod "downward-api-e46b86ab-77f7-46de-aa77-971bb979226b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019452641s
Jun 7 21:10:19.866: INFO: Pod "downward-api-e46b86ab-77f7-46de-aa77-971bb979226b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023726008s
STEP: Saw pod success
Jun 7 21:10:19.866: INFO: Pod "downward-api-e46b86ab-77f7-46de-aa77-971bb979226b" satisfied condition "success or failure"
Jun 7 21:10:19.870: INFO: Trying to get logs from node jerma-worker2 pod downward-api-e46b86ab-77f7-46de-aa77-971bb979226b container dapi-container:
STEP: delete the pod
Jun 7 21:10:19.906: INFO: Waiting for pod downward-api-e46b86ab-77f7-46de-aa77-971bb979226b to disappear
Jun 7 21:10:19.918: INFO: Pod downward-api-e46b86ab-77f7-46de-aa77-971bb979226b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:10:19.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-612" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":220,"failed":0}
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:10:19.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jun 7 21:10:19.993: INFO: Waiting up to 5m0s for pod "downward-api-26143ed4-01d9-4417-9d9c-aeb6e5a10788" in namespace "downward-api-47" to be "success or failure"
Jun 7 21:10:20.002: INFO: Pod "downward-api-26143ed4-01d9-4417-9d9c-aeb6e5a10788": Phase="Pending", Reason="", readiness=false. Elapsed: 8.687199ms
Jun 7 21:10:22.009: INFO: Pod "downward-api-26143ed4-01d9-4417-9d9c-aeb6e5a10788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015370872s
Jun 7 21:10:24.014: INFO: Pod "downward-api-26143ed4-01d9-4417-9d9c-aeb6e5a10788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020338779s
STEP: Saw pod success
Jun 7 21:10:24.014: INFO: Pod "downward-api-26143ed4-01d9-4417-9d9c-aeb6e5a10788" satisfied condition "success or failure"
Jun 7 21:10:24.017: INFO: Trying to get logs from node jerma-worker pod downward-api-26143ed4-01d9-4417-9d9c-aeb6e5a10788 container dapi-container:
STEP: delete the pod
Jun 7 21:10:24.052: INFO: Waiting for pod downward-api-26143ed4-01d9-4417-9d9c-aeb6e5a10788 to disappear
Jun 7 21:10:24.056: INFO: Pod downward-api-26143ed4-01d9-4417-9d9c-aeb6e5a10788 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:10:24.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-47" for this suite.
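The two Downward API env-var specs above (pod name/namespace/IP, then host IP) both inject pod metadata through `env[].valueFrom.fieldRef`. A minimal sketch of such a pod follows; the name, image, and command are illustrative, but the `fieldPath` values are the standard Downward API fields these tests exercise.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo   # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox              # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: HOST_IP                 # the field the "host IP" spec checks
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
  restartPolicy: Never
```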
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:10:24.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-6e894754-95d3-4b0e-bb46-2d100b2acc5c
STEP: Creating secret with name secret-projected-all-test-volume-3226e4aa-2b00-4f4c-87f9-26974ff16bb0
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 7 21:10:24.153: INFO: Waiting up to 5m0s for pod "projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2" in namespace "projected-9105" to be "success or failure"
Jun 7 21:10:24.175: INFO: Pod "projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.558001ms
Jun 7 21:10:26.220: INFO: Pod "projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066960017s
Jun 7 21:10:28.224: INFO: Pod "projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2": Phase="Running", Reason="", readiness=true. Elapsed: 4.071439724s
Jun 7 21:10:30.229: INFO: Pod "projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076314177s
STEP: Saw pod success
Jun 7 21:10:30.229: INFO: Pod "projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2" satisfied condition "success or failure"
Jun 7 21:10:30.233: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2 container projected-all-volume-test:
STEP: delete the pod
Jun 7 21:10:30.251: INFO: Waiting for pod projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2 to disappear
Jun 7 21:10:30.272: INFO: Pod projected-volume-50a3592c-af51-4d74-a74a-d1c95ec404f2 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:10:30.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9105" for this suite.
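The Projected combined spec above mounts a ConfigMap, a Secret, and Downward API fields through a single `projected` volume. A sketch of the pod shape follows; image, command, paths, and item layout are illustrative, while the ConfigMap and Secret names are the ones the test logged creating.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo      # hypothetical name
spec:
  containers:
  - name: projected-all-volume-test
    image: busybox              # illustrative image
    command: ["sh", "-c", "ls -R /projected-volume"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume-6e894754-95d3-4b0e-bb46-2d100b2acc5c
      - secret:
          name: secret-projected-all-test-volume-3226e4aa-2b00-4f4c-87f9-26974ff16bb0
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
```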
• [SLOW TEST:6.216 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":14,"skipped":277,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:10:30.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 7 21:10:30.336: INFO: Waiting up to 5m0s for pod "pod-03348228-9d3f-46c4-a38f-cbdaf0dec079" in namespace "emptydir-6407" to be "success or failure"
Jun 7 21:10:30.364: INFO: Pod "pod-03348228-9d3f-46c4-a38f-cbdaf0dec079": Phase="Pending", Reason="", readiness=false. Elapsed: 27.510111ms
Jun 7 21:10:32.367: INFO: Pod "pod-03348228-9d3f-46c4-a38f-cbdaf0dec079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031373703s
Jun 7 21:10:34.371: INFO: Pod "pod-03348228-9d3f-46c4-a38f-cbdaf0dec079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035098129s
STEP: Saw pod success
Jun 7 21:10:34.371: INFO: Pod "pod-03348228-9d3f-46c4-a38f-cbdaf0dec079" satisfied condition "success or failure"
Jun 7 21:10:34.374: INFO: Trying to get logs from node jerma-worker pod pod-03348228-9d3f-46c4-a38f-cbdaf0dec079 container test-container:
STEP: delete the pod
Jun 7 21:10:34.395: INFO: Waiting for pod pod-03348228-9d3f-46c4-a38f-cbdaf0dec079 to disappear
Jun 7 21:10:34.399: INFO: Pod pod-03348228-9d3f-46c4-a38f-cbdaf0dec079 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:10:34.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6407" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":310,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:10:34.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1155, will wait for the garbage collector to delete the pods
Jun 7 21:10:38.855: INFO: Deleting Job.batch foo took: 7.026305ms
Jun 7 21:10:39.155: INFO: Terminating Job.batch foo pods took: 300.254975ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:11:19.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1155" for this suite.
• [SLOW TEST:45.212 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":16,"skipped":329,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:11:19.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jun 7 21:11:19.727: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 7 21:11:19.736: INFO: Waiting for terminating namespaces to be deleted...
Jun 7 21:11:19.739: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Jun 7 21:11:19.743: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Jun 7 21:11:19.744: INFO: Container kindnet-cni ready: true, restart count 2
Jun 7 21:11:19.744: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Jun 7 21:11:19.744: INFO: Container kube-proxy ready: true, restart count 0
Jun 7 21:11:19.744: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Jun 7 21:11:19.750: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Jun 7 21:11:19.750: INFO: Container kindnet-cni ready: true, restart count 2
Jun 7 21:11:19.750: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
Jun 7 21:11:19.750: INFO: Container kube-bench ready: false, restart count 0
Jun 7 21:11:19.750: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Jun 7 21:11:19.750: INFO: Container kube-proxy ready: true, restart count 0
Jun 7 21:11:19.750: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
Jun 7 21:11:19.750: INFO: Container kube-hunter ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-25ff71d3-15ae-44bf-a2c3-69bbeeb883f3 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-25ff71d3-15ae-44bf-a2c3-69bbeeb883f3 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-25ff71d3-15ae-44bf-a2c3-69bbeeb883f3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:16:27.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-915" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:308.324 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":17,"skipped":351,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:16:27.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 21:16:28.077: INFO: Creating ReplicaSet my-hostname-basic-a8ab0117-5abd-4d95-a5da-744271845518
Jun 7 21:16:28.089: INFO: Pod name my-hostname-basic-a8ab0117-5abd-4d95-a5da-744271845518: Found 0 pods out of 1
Jun 7 21:16:33.141: INFO: Pod name my-hostname-basic-a8ab0117-5abd-4d95-a5da-744271845518: Found 1 pods out of 1
Jun 7 21:16:33.141: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a8ab0117-5abd-4d95-a5da-744271845518" is running
Jun 7 21:16:33.149: INFO: Pod "my-hostname-basic-a8ab0117-5abd-4d95-a5da-744271845518-z9nh5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 21:16:28 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 21:16:31 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 21:16:31 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 21:16:28 +0000 UTC Reason: Message:}])
Jun 7 21:16:33.149: INFO: Trying to dial the pod
Jun 7 21:16:38.168: INFO: Controller my-hostname-basic-a8ab0117-5abd-4d95-a5da-744271845518: Got expected result from replica 1 [my-hostname-basic-a8ab0117-5abd-4d95-a5da-744271845518-z9nh5]: "my-hostname-basic-a8ab0117-5abd-4d95-a5da-744271845518-z9nh5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:16:38.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2319" for this suite.
• [SLOW TEST:10.182 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":18,"skipped":373,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:16:38.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jun 7 21:16:38.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbb8f159-d2c3-4180-820b-dabb42868b83" in namespace "downward-api-4676" to be "success or failure"
Jun 7 21:16:38.278: INFO: Pod "downwardapi-volume-cbb8f159-d2c3-4180-820b-dabb42868b83": Phase="Pending", Reason="", readiness=false. Elapsed: 41.920791ms
Jun 7 21:16:40.283: INFO: Pod "downwardapi-volume-cbb8f159-d2c3-4180-820b-dabb42868b83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046274824s
Jun 7 21:16:42.286: INFO: Pod "downwardapi-volume-cbb8f159-d2c3-4180-820b-dabb42868b83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049644359s
STEP: Saw pod success
Jun 7 21:16:42.286: INFO: Pod "downwardapi-volume-cbb8f159-d2c3-4180-820b-dabb42868b83" satisfied condition "success or failure"
Jun 7 21:16:42.288: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cbb8f159-d2c3-4180-820b-dabb42868b83 container client-container:
STEP: delete the pod
Jun 7 21:16:42.374: INFO: Waiting for pod downwardapi-volume-cbb8f159-d2c3-4180-820b-dabb42868b83 to disappear
Jun 7 21:16:42.482: INFO: Pod downwardapi-volume-cbb8f159-d2c3-4180-820b-dabb42868b83 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:16:42.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4676" for this suite.
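The DefaultMode spec above checks that files projected by a `downwardAPI` volume inherit the volume-level `defaultMode`. A sketch of such a pod follows; the name, image, command, and the 0400 mode value are illustrative assumptions (the log does not show which mode the test used), but `defaultMode` is the field under test.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                     # illustrative image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                # example value; items inherit it unless they set their own mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never
```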
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":375,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:16:42.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 7 21:16:46.681: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:16:46.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4162" for this suite.
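The termination-message spec above runs a container as a non-root user that writes "DONE" (the value the log shows being matched) to a custom `terminationMessagePath`. A sketch of the shape follows; the pod name, image, user ID, and path are illustrative, not the exact values the e2e framework uses.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo        # hypothetical name
spec:
  containers:
  - name: term-demo
    image: busybox                      # illustrative image
    command: ["sh", "-c", "printf DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path (default is /dev/termination-log)
    securityContext:
      runAsUser: 1000                   # non-root, matching the [LinuxOnly] variant
  restartPolicy: Never
```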
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":385,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:16:46.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-edd3d89d-ee4f-4346-98d1-daaed7171b7c
STEP: Creating a pod to test consume secrets
Jun 7 21:16:46.822: INFO: Waiting up to 5m0s for pod "pod-secrets-06078123-537f-4dad-947d-f612037c63f6" in namespace "secrets-6633" to be "success or failure"
Jun 7 21:16:46.826: INFO: Pod "pod-secrets-06078123-537f-4dad-947d-f612037c63f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606364ms
Jun 7 21:16:48.830: INFO: Pod "pod-secrets-06078123-537f-4dad-947d-f612037c63f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007524315s
Jun 7 21:16:50.834: INFO: Pod "pod-secrets-06078123-537f-4dad-947d-f612037c63f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011355344s
STEP: Saw pod success
Jun 7 21:16:50.834: INFO: Pod "pod-secrets-06078123-537f-4dad-947d-f612037c63f6" satisfied condition "success or failure"
Jun 7 21:16:50.836: INFO: Trying to get logs from node jerma-worker pod pod-secrets-06078123-537f-4dad-947d-f612037c63f6 container secret-volume-test:
STEP: delete the pod
Jun 7 21:16:50.885: INFO: Waiting for pod pod-secrets-06078123-537f-4dad-947d-f612037c63f6 to disappear
Jun 7 21:16:50.898: INFO: Pod pod-secrets-06078123-537f-4dad-947d-f612037c63f6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:16:50.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6633" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":393,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:16:50.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 21:16:50.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:16:55.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9455" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:16:55.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:16:55.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b3f90e4-2048-4279-ad88-0e31841cdc2d" in namespace "downward-api-9996" to be "success or failure" Jun 7 21:16:55.157: INFO: Pod "downwardapi-volume-5b3f90e4-2048-4279-ad88-0e31841cdc2d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.383635ms Jun 7 21:16:57.161: INFO: Pod "downwardapi-volume-5b3f90e4-2048-4279-ad88-0e31841cdc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02431459s Jun 7 21:16:59.165: INFO: Pod "downwardapi-volume-5b3f90e4-2048-4279-ad88-0e31841cdc2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028169334s STEP: Saw pod success Jun 7 21:16:59.165: INFO: Pod "downwardapi-volume-5b3f90e4-2048-4279-ad88-0e31841cdc2d" satisfied condition "success or failure" Jun 7 21:16:59.168: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5b3f90e4-2048-4279-ad88-0e31841cdc2d container client-container: STEP: delete the pod Jun 7 21:16:59.203: INFO: Waiting for pod downwardapi-volume-5b3f90e4-2048-4279-ad88-0e31841cdc2d to disappear Jun 7 21:16:59.211: INFO: Pod downwardapi-volume-5b3f90e4-2048-4279-ad88-0e31841cdc2d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:16:59.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9996" for this suite. 
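The Downward API volume test above exercises a pod spec along these lines. This is a hedged sketch (pod name, volume name, and command are illustrative, not taken from the log): a `downwardAPI` volume with a `resourceFieldRef` on `limits.memory` exposes the container's memory limit as a file, and when no limit is set it falls back to the node's allocatable memory, which is what the test asserts.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set, so the Downward API reports
    # the node's allocatable memory instead of a container limit.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```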
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":480,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:16:59.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod Jun 7 21:16:59.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6235 -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 7 21:16:59.412: INFO: stderr: "" Jun 7 21:16:59.412: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. 
Jun 7 21:16:59.412: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 7 21:16:59.412: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6235" to be "running and ready, or succeeded" Jun 7 21:16:59.433: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 20.94575ms Jun 7 21:17:01.441: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028974461s Jun 7 21:17:03.446: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.033484108s Jun 7 21:17:03.446: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 7 21:17:03.446: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Jun 7 21:17:03.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6235' Jun 7 21:17:03.559: INFO: stderr: "" Jun 7 21:17:03.559: INFO: stdout: "I0607 21:17:02.486836 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/zmp 308\nI0607 21:17:02.686988 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n85 253\nI0607 21:17:02.887065 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/mgpc 321\nI0607 21:17:03.087104 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/vkb 434\nI0607 21:17:03.286997 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/6mv6 470\nI0607 21:17:03.487067 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/kqt 451\n" STEP: limiting log lines Jun 7 21:17:03.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6235 --tail=1' Jun 7 21:17:03.681: INFO: stderr: "" Jun 7 21:17:03.681: INFO: stdout: "I0607 21:17:03.487067 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/kqt 
451\n" Jun 7 21:17:03.681: INFO: got output "I0607 21:17:03.487067 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/kqt 451\n" STEP: limiting log bytes Jun 7 21:17:03.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6235 --limit-bytes=1' Jun 7 21:17:03.790: INFO: stderr: "" Jun 7 21:17:03.790: INFO: stdout: "I" Jun 7 21:17:03.790: INFO: got output "I" STEP: exposing timestamps Jun 7 21:17:03.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6235 --tail=1 --timestamps' Jun 7 21:17:03.906: INFO: stderr: "" Jun 7 21:17:03.906: INFO: stdout: "2020-06-07T21:17:03.88726166Z I0607 21:17:03.887044 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5vbk 281\n" Jun 7 21:17:03.906: INFO: got output "2020-06-07T21:17:03.88726166Z I0607 21:17:03.887044 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5vbk 281\n" STEP: restricting to a time range Jun 7 21:17:06.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6235 --since=1s' Jun 7 21:17:06.519: INFO: stderr: "" Jun 7 21:17:06.519: INFO: stdout: "I0607 21:17:05.687029 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/xtm 353\nI0607 21:17:05.887075 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/v6b 335\nI0607 21:17:06.087056 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/sjtf 357\nI0607 21:17:06.287081 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/xscw 521\nI0607 21:17:06.487056 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/xd45 594\n" Jun 7 21:17:06.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6235 --since=24h' Jun 7 21:17:06.628: INFO: stderr: "" Jun 7 21:17:06.628: INFO: stdout: "I0607 21:17:02.486836 1 
logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/zmp 308\nI0607 21:17:02.686988 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n85 253\nI0607 21:17:02.887065 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/mgpc 321\nI0607 21:17:03.087104 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/vkb 434\nI0607 21:17:03.286997 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/6mv6 470\nI0607 21:17:03.487067 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/kqt 451\nI0607 21:17:03.687045 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/xqz7 562\nI0607 21:17:03.887044 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5vbk 281\nI0607 21:17:04.087079 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/6mg 374\nI0607 21:17:04.286996 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/mpf 384\nI0607 21:17:04.487048 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/4gd 389\nI0607 21:17:04.687081 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/xdrk 309\nI0607 21:17:04.886995 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/bv2 293\nI0607 21:17:05.087039 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/zw6l 385\nI0607 21:17:05.287048 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/vg5s 532\nI0607 21:17:05.487023 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/r5g 283\nI0607 21:17:05.687029 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/xtm 353\nI0607 21:17:05.887075 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/v6b 335\nI0607 21:17:06.087056 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/sjtf 357\nI0607 21:17:06.287081 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/xscw 521\nI0607 21:17:06.487056 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/xd45 594\n" [AfterEach] Kubectl logs 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Jun 7 21:17:06.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6235' Jun 7 21:17:19.252: INFO: stderr: "" Jun 7 21:17:19.252: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:17:19.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6235" for this suite. • [SLOW TEST:20.041 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":24,"skipped":481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:17:19.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:17:19.930: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:17:22.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161439, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161439, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161440, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161439, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:17:25.245: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:17:25.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3005" for this suite. STEP: Destroying namespace "webhook-3005-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.688 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":25,"skipped":512,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:17:25.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:17:26.025: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1ea2826a-d65b-4afa-840a-c1bd598491e3" in namespace "security-context-test-2370" to be "success or failure" Jun 7 21:17:26.028: INFO: Pod "alpine-nnp-false-1ea2826a-d65b-4afa-840a-c1bd598491e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.587712ms Jun 7 21:17:28.090: INFO: Pod "alpine-nnp-false-1ea2826a-d65b-4afa-840a-c1bd598491e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065255832s Jun 7 21:17:30.095: INFO: Pod "alpine-nnp-false-1ea2826a-d65b-4afa-840a-c1bd598491e3": Phase="Running", Reason="", readiness=true. Elapsed: 4.070138922s Jun 7 21:17:32.100: INFO: Pod "alpine-nnp-false-1ea2826a-d65b-4afa-840a-c1bd598491e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075289998s Jun 7 21:17:32.100: INFO: Pod "alpine-nnp-false-1ea2826a-d65b-4afa-840a-c1bd598491e3" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:17:32.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2370" for this suite. 
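The Security Context test above creates a pod with privilege escalation disabled. A minimal sketch of such a spec follows, assuming an illustrative image and command (the e2e suite uses its own purpose-built test image); the key field is `securityContext.allowPrivilegeEscalation: false`, which prevents the process from gaining more privileges than its parent, e.g. via setuid binaries.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine   # illustrative; the test uses a dedicated e2e image
    command: ["sh", "-c", "id"]
    securityContext:
      runAsUser: 1000
      # Sets no_new_privs on the process, blocking privilege escalation
      # through setuid binaries or file capabilities.
      allowPrivilegeEscalation: false
```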
• [SLOW TEST:6.167 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":523,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:17:32.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6912 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 7 21:17:32.172: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 7 21:17:54.370: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.201:8080/dial?request=hostname&protocol=udp&host=10.244.1.200&port=8081&tries=1'] Namespace:pod-network-test-6912 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:17:54.370: INFO: >>> kubeConfig: /root/.kube/config I0607 21:17:54.408465 6 log.go:172] (0xc002de4000) (0xc0027b45a0) Create stream I0607 21:17:54.408501 6 log.go:172] (0xc002de4000) (0xc0027b45a0) Stream added, broadcasting: 1 I0607 21:17:54.411249 6 log.go:172] (0xc002de4000) Reply frame received for 1 I0607 21:17:54.411306 6 log.go:172] (0xc002de4000) (0xc001e04000) Create stream I0607 21:17:54.411320 6 log.go:172] (0xc002de4000) (0xc001e04000) Stream added, broadcasting: 3 I0607 21:17:54.412361 6 log.go:172] (0xc002de4000) Reply frame received for 3 I0607 21:17:54.412398 6 log.go:172] (0xc002de4000) (0xc0027b4780) Create stream I0607 21:17:54.412410 6 log.go:172] (0xc002de4000) (0xc0027b4780) Stream added, broadcasting: 5 I0607 21:17:54.413542 6 log.go:172] (0xc002de4000) Reply frame received for 5 I0607 21:17:54.596000 6 log.go:172] (0xc002de4000) Data frame received for 3 I0607 21:17:54.596035 6 log.go:172] (0xc001e04000) (3) Data frame handling I0607 21:17:54.596114 6 log.go:172] (0xc001e04000) (3) Data frame sent I0607 21:17:54.596869 6 log.go:172] (0xc002de4000) Data frame received for 3 I0607 21:17:54.596913 6 log.go:172] (0xc001e04000) (3) Data frame handling I0607 21:17:54.596953 6 log.go:172] (0xc002de4000) Data frame received for 5 I0607 21:17:54.596973 6 log.go:172] (0xc0027b4780) (5) Data frame handling I0607 21:17:54.599614 6 log.go:172] (0xc002de4000) Data frame received for 1 I0607 21:17:54.599639 6 log.go:172] (0xc0027b45a0) (1) Data frame handling I0607 21:17:54.599655 6 log.go:172] (0xc0027b45a0) (1) Data frame sent I0607 21:17:54.599669 6 log.go:172] (0xc002de4000) (0xc0027b45a0) Stream removed, broadcasting: 1 I0607 21:17:54.599688 6 log.go:172] (0xc002de4000) Go away received 
I0607 21:17:54.600246 6 log.go:172] (0xc002de4000) (0xc0027b45a0) Stream removed, broadcasting: 1 I0607 21:17:54.600286 6 log.go:172] (0xc002de4000) (0xc001e04000) Stream removed, broadcasting: 3 I0607 21:17:54.600321 6 log.go:172] (0xc002de4000) (0xc0027b4780) Stream removed, broadcasting: 5 Jun 7 21:17:54.600: INFO: Waiting for responses: map[] Jun 7 21:17:54.633: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.201:8080/dial?request=hostname&protocol=udp&host=10.244.2.99&port=8081&tries=1'] Namespace:pod-network-test-6912 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:17:54.633: INFO: >>> kubeConfig: /root/.kube/config I0607 21:17:54.667608 6 log.go:172] (0xc002bd6c60) (0xc0023b1540) Create stream I0607 21:17:54.667642 6 log.go:172] (0xc002bd6c60) (0xc0023b1540) Stream added, broadcasting: 1 I0607 21:17:54.670362 6 log.go:172] (0xc002bd6c60) Reply frame received for 1 I0607 21:17:54.670404 6 log.go:172] (0xc002bd6c60) (0xc0027b48c0) Create stream I0607 21:17:54.670422 6 log.go:172] (0xc002bd6c60) (0xc0027b48c0) Stream added, broadcasting: 3 I0607 21:17:54.671584 6 log.go:172] (0xc002bd6c60) Reply frame received for 3 I0607 21:17:54.671648 6 log.go:172] (0xc002bd6c60) (0xc001e04140) Create stream I0607 21:17:54.671693 6 log.go:172] (0xc002bd6c60) (0xc001e04140) Stream added, broadcasting: 5 I0607 21:17:54.672627 6 log.go:172] (0xc002bd6c60) Reply frame received for 5 I0607 21:17:54.748438 6 log.go:172] (0xc002bd6c60) Data frame received for 3 I0607 21:17:54.748463 6 log.go:172] (0xc0027b48c0) (3) Data frame handling I0607 21:17:54.748480 6 log.go:172] (0xc0027b48c0) (3) Data frame sent I0607 21:17:54.749674 6 log.go:172] (0xc002bd6c60) Data frame received for 3 I0607 21:17:54.749696 6 log.go:172] (0xc0027b48c0) (3) Data frame handling I0607 21:17:54.749710 6 log.go:172] (0xc002bd6c60) Data frame received for 5 I0607 21:17:54.749722 6 log.go:172] 
(0xc001e04140) (5) Data frame handling I0607 21:17:54.751491 6 log.go:172] (0xc002bd6c60) Data frame received for 1 I0607 21:17:54.751506 6 log.go:172] (0xc0023b1540) (1) Data frame handling I0607 21:17:54.751518 6 log.go:172] (0xc0023b1540) (1) Data frame sent I0607 21:17:54.751527 6 log.go:172] (0xc002bd6c60) (0xc0023b1540) Stream removed, broadcasting: 1 I0607 21:17:54.751556 6 log.go:172] (0xc002bd6c60) Go away received I0607 21:17:54.751599 6 log.go:172] (0xc002bd6c60) (0xc0023b1540) Stream removed, broadcasting: 1 I0607 21:17:54.751609 6 log.go:172] (0xc002bd6c60) (0xc0027b48c0) Stream removed, broadcasting: 3 I0607 21:17:54.751616 6 log.go:172] (0xc002bd6c60) (0xc001e04140) Stream removed, broadcasting: 5 Jun 7 21:17:54.751: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:17:54.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6912" for this suite. 
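The intra-pod UDP check above works by deploying agnhost `netexec` server pods and then asking one of them, via its HTTP `/dial` endpoint (visible in the `curl … /dial?request=hostname&protocol=udp&…` commands in the log), to probe another pod over UDP. A hedged sketch of such a server pod, with illustrative naming, assuming the `netexec` flags shown here:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-example   # illustrative name
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    # netexec serves HTTP (including the /dial prober endpoint) on 8080
    # and echoes UDP requests on 8081.
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8080
      protocol: TCP
    - containerPort: 8081
      protocol: UDP
```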
• [SLOW TEST:22.641 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":539,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:17:54.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 7 21:18:04.861: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:04.886: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 21:18:06.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:06.891: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 21:18:08.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:08.891: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 21:18:10.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:10.890: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 21:18:12.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:12.891: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 21:18:14.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:14.891: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 21:18:16.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:16.895: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 21:18:18.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:18.891: INFO: Pod pod-with-poststart-exec-hook still exists Jun 7 21:18:20.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 7 21:18:20.890: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:18:20.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1211" for this suite. 
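The lifecycle hook test above attaches a `postStart` exec handler to a container. A minimal sketch, with an illustrative hook command (the real test's command differs): Kubernetes runs the handler right after the container is created, with no ordering guarantee relative to the container's entrypoint, and kills the container if the hook fails.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    lifecycle:
      postStart:
        exec:
          # Illustrative command: executed inside the container
          # immediately after it is created.
          command: ["sh", "-c", "echo poststart > /tmp/hook.log"]
```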
• [SLOW TEST:26.139 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":549,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:18:20.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:18:21.369: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:18:23.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161501, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161501, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161501, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161501, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:18:26.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:18:36.858: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8460" for this suite. STEP: Destroying namespace "webhook-8460-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.124 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":29,"skipped":553,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:18:37.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 
21:18:37.941: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:18:40.005: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161517, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161517, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161517, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161517, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:18:43.070: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the 
mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:18:43.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2753" for this suite. STEP: Destroying namespace "webhook-2753-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.366 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":30,"skipped":556,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:18:43.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 7 21:18:48.042: INFO: Successfully updated pod "annotationupdate63febaab-628a-4a55-bd37-5f68321e5de8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:18:52.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2744" for this suite. • [SLOW TEST:8.722 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":567,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:18:52.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 7 21:18:52.223: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-a 571f5eb6-ff64-42e0-abb5-537a574b7e0b 22523136 0 2020-06-07 21:18:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 21:18:52.223: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-a 571f5eb6-ff64-42e0-abb5-537a574b7e0b 22523136 0 2020-06-07 21:18:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 7 21:19:02.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-a 571f5eb6-ff64-42e0-abb5-537a574b7e0b 22523179 0 2020-06-07 21:18:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 7 21:19:02.231: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-a 571f5eb6-ff64-42e0-abb5-537a574b7e0b 22523179 0 2020-06-07 21:18:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers 
observe the notification Jun 7 21:19:12.238: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-a 571f5eb6-ff64-42e0-abb5-537a574b7e0b 22523211 0 2020-06-07 21:18:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 7 21:19:12.238: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-a 571f5eb6-ff64-42e0-abb5-537a574b7e0b 22523211 0 2020-06-07 21:18:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 7 21:19:22.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-a 571f5eb6-ff64-42e0-abb5-537a574b7e0b 22523241 0 2020-06-07 21:18:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 7 21:19:22.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-a 571f5eb6-ff64-42e0-abb5-537a574b7e0b 22523241 0 2020-06-07 21:18:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 7 21:19:32.254: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-b b29cd411-6b05-48dc-a4bc-cc60be87a058 22523271 0 2020-06-07 21:19:32 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 21:19:32.254: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-b b29cd411-6b05-48dc-a4bc-cc60be87a058 22523271 0 2020-06-07 21:19:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 7 21:19:42.262: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-b b29cd411-6b05-48dc-a4bc-cc60be87a058 22523301 0 2020-06-07 21:19:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 21:19:42.262: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-configmap-b b29cd411-6b05-48dc-a4bc-cc60be87a058 22523301 0 2020-06-07 21:19:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:19:52.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6071" for this suite. 
• [SLOW TEST:60.160 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":32,"skipped":577,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:19:52.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9256 STEP: creating replication controller nodeport-test in namespace services-9256 I0607 21:19:52.422709 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9256, replica count: 2 I0607 21:19:55.473312 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 21:19:58.473613 6 
runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 7 21:19:58.473: INFO: Creating new exec pod Jun 7 21:20:03.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9256 execpodbgvrv -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jun 7 21:20:06.606: INFO: stderr: "I0607 21:20:06.502758 560 log.go:172] (0xc0000f4f20) (0xc0006e7ea0) Create stream\nI0607 21:20:06.502804 560 log.go:172] (0xc0000f4f20) (0xc0006e7ea0) Stream added, broadcasting: 1\nI0607 21:20:06.506983 560 log.go:172] (0xc0000f4f20) Reply frame received for 1\nI0607 21:20:06.507109 560 log.go:172] (0xc0000f4f20) (0xc0006486e0) Create stream\nI0607 21:20:06.507137 560 log.go:172] (0xc0000f4f20) (0xc0006486e0) Stream added, broadcasting: 3\nI0607 21:20:06.508402 560 log.go:172] (0xc0000f4f20) Reply frame received for 3\nI0607 21:20:06.508443 560 log.go:172] (0xc0000f4f20) (0xc0009180a0) Create stream\nI0607 21:20:06.508457 560 log.go:172] (0xc0000f4f20) (0xc0009180a0) Stream added, broadcasting: 5\nI0607 21:20:06.509707 560 log.go:172] (0xc0000f4f20) Reply frame received for 5\nI0607 21:20:06.595742 560 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0607 21:20:06.595767 560 log.go:172] (0xc0009180a0) (5) Data frame handling\nI0607 21:20:06.595780 560 log.go:172] (0xc0009180a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0607 21:20:06.596583 560 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0607 21:20:06.596610 560 log.go:172] (0xc0009180a0) (5) Data frame handling\nI0607 21:20:06.596638 560 log.go:172] (0xc0009180a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0607 21:20:06.596828 560 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0607 21:20:06.596859 560 log.go:172] (0xc0009180a0) (5) Data frame handling\nI0607 21:20:06.596885 560 log.go:172] (0xc0000f4f20) Data frame received for 
3\nI0607 21:20:06.596909 560 log.go:172] (0xc0006486e0) (3) Data frame handling\nI0607 21:20:06.599110 560 log.go:172] (0xc0000f4f20) Data frame received for 1\nI0607 21:20:06.599141 560 log.go:172] (0xc0006e7ea0) (1) Data frame handling\nI0607 21:20:06.599173 560 log.go:172] (0xc0006e7ea0) (1) Data frame sent\nI0607 21:20:06.599207 560 log.go:172] (0xc0000f4f20) (0xc0006e7ea0) Stream removed, broadcasting: 1\nI0607 21:20:06.599230 560 log.go:172] (0xc0000f4f20) Go away received\nI0607 21:20:06.599593 560 log.go:172] (0xc0000f4f20) (0xc0006e7ea0) Stream removed, broadcasting: 1\nI0607 21:20:06.599608 560 log.go:172] (0xc0000f4f20) (0xc0006486e0) Stream removed, broadcasting: 3\nI0607 21:20:06.599615 560 log.go:172] (0xc0000f4f20) (0xc0009180a0) Stream removed, broadcasting: 5\n" Jun 7 21:20:06.606: INFO: stdout: "" Jun 7 21:20:06.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9256 execpodbgvrv -- /bin/sh -x -c nc -zv -t -w 2 10.108.68.136 80' Jun 7 21:20:06.814: INFO: stderr: "I0607 21:20:06.744779 593 log.go:172] (0xc00091a160) (0xc0007fda40) Create stream\nI0607 21:20:06.744861 593 log.go:172] (0xc00091a160) (0xc0007fda40) Stream added, broadcasting: 1\nI0607 21:20:06.747170 593 log.go:172] (0xc00091a160) Reply frame received for 1\nI0607 21:20:06.747213 593 log.go:172] (0xc00091a160) (0xc0007fdc20) Create stream\nI0607 21:20:06.747224 593 log.go:172] (0xc00091a160) (0xc0007fdc20) Stream added, broadcasting: 3\nI0607 21:20:06.748362 593 log.go:172] (0xc00091a160) Reply frame received for 3\nI0607 21:20:06.748406 593 log.go:172] (0xc00091a160) (0xc000aa0000) Create stream\nI0607 21:20:06.748419 593 log.go:172] (0xc00091a160) (0xc000aa0000) Stream added, broadcasting: 5\nI0607 21:20:06.749582 593 log.go:172] (0xc00091a160) Reply frame received for 5\nI0607 21:20:06.806363 593 log.go:172] (0xc00091a160) Data frame received for 3\nI0607 21:20:06.806393 593 log.go:172] (0xc0007fdc20) (3) Data frame handling\nI0607 
21:20:06.806415 593 log.go:172] (0xc00091a160) Data frame received for 5\nI0607 21:20:06.806423 593 log.go:172] (0xc000aa0000) (5) Data frame handling\nI0607 21:20:06.806432 593 log.go:172] (0xc000aa0000) (5) Data frame sent\nI0607 21:20:06.806443 593 log.go:172] (0xc00091a160) Data frame received for 5\nI0607 21:20:06.806454 593 log.go:172] (0xc000aa0000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.68.136 80\nConnection to 10.108.68.136 80 port [tcp/http] succeeded!\nI0607 21:20:06.807802 593 log.go:172] (0xc00091a160) Data frame received for 1\nI0607 21:20:06.807834 593 log.go:172] (0xc0007fda40) (1) Data frame handling\nI0607 21:20:06.807878 593 log.go:172] (0xc0007fda40) (1) Data frame sent\nI0607 21:20:06.807899 593 log.go:172] (0xc00091a160) (0xc0007fda40) Stream removed, broadcasting: 1\nI0607 21:20:06.807916 593 log.go:172] (0xc00091a160) Go away received\nI0607 21:20:06.808195 593 log.go:172] (0xc00091a160) (0xc0007fda40) Stream removed, broadcasting: 1\nI0607 21:20:06.808212 593 log.go:172] (0xc00091a160) (0xc0007fdc20) Stream removed, broadcasting: 3\nI0607 21:20:06.808222 593 log.go:172] (0xc00091a160) (0xc000aa0000) Stream removed, broadcasting: 5\n" Jun 7 21:20:06.814: INFO: stdout: "" Jun 7 21:20:06.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9256 execpodbgvrv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31583' Jun 7 21:20:07.021: INFO: stderr: "I0607 21:20:06.933064 613 log.go:172] (0xc000675080) (0xc0005d7a40) Create stream\nI0607 21:20:06.933289 613 log.go:172] (0xc000675080) (0xc0005d7a40) Stream added, broadcasting: 1\nI0607 21:20:06.936068 613 log.go:172] (0xc000675080) Reply frame received for 1\nI0607 21:20:06.936112 613 log.go:172] (0xc000675080) (0xc00061c000) Create stream\nI0607 21:20:06.936127 613 log.go:172] (0xc000675080) (0xc00061c000) Stream added, broadcasting: 3\nI0607 21:20:06.937027 613 log.go:172] (0xc000675080) Reply frame received for 3\nI0607 21:20:06.937081 613 
log.go:172] (0xc000675080) (0xc0003a2000) Create stream\nI0607 21:20:06.937098 613 log.go:172] (0xc000675080) (0xc0003a2000) Stream added, broadcasting: 5\nI0607 21:20:06.938146 613 log.go:172] (0xc000675080) Reply frame received for 5\nI0607 21:20:07.012733 613 log.go:172] (0xc000675080) Data frame received for 3\nI0607 21:20:07.012764 613 log.go:172] (0xc00061c000) (3) Data frame handling\nI0607 21:20:07.012781 613 log.go:172] (0xc000675080) Data frame received for 5\nI0607 21:20:07.012786 613 log.go:172] (0xc0003a2000) (5) Data frame handling\nI0607 21:20:07.012792 613 log.go:172] (0xc0003a2000) (5) Data frame sent\nI0607 21:20:07.012797 613 log.go:172] (0xc000675080) Data frame received for 5\nI0607 21:20:07.012801 613 log.go:172] (0xc0003a2000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31583\nConnection to 172.17.0.10 31583 port [tcp/31583] succeeded!\nI0607 21:20:07.014584 613 log.go:172] (0xc000675080) Data frame received for 1\nI0607 21:20:07.014620 613 log.go:172] (0xc0005d7a40) (1) Data frame handling\nI0607 21:20:07.014635 613 log.go:172] (0xc0005d7a40) (1) Data frame sent\nI0607 21:20:07.014656 613 log.go:172] (0xc000675080) (0xc0005d7a40) Stream removed, broadcasting: 1\nI0607 21:20:07.014678 613 log.go:172] (0xc000675080) Go away received\nI0607 21:20:07.015079 613 log.go:172] (0xc000675080) (0xc0005d7a40) Stream removed, broadcasting: 1\nI0607 21:20:07.015099 613 log.go:172] (0xc000675080) (0xc00061c000) Stream removed, broadcasting: 3\nI0607 21:20:07.015110 613 log.go:172] (0xc000675080) (0xc0003a2000) Stream removed, broadcasting: 5\n" Jun 7 21:20:07.021: INFO: stdout: "" Jun 7 21:20:07.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9256 execpodbgvrv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31583' Jun 7 21:20:07.228: INFO: stderr: "I0607 21:20:07.157633 635 log.go:172] (0xc0005a5290) (0xc0009f61e0) Create stream\nI0607 21:20:07.157687 635 log.go:172] (0xc0005a5290) (0xc0009f61e0) 
Stream added, broadcasting: 1\nI0607 21:20:07.160261 635 log.go:172] (0xc0005a5290) Reply frame received for 1\nI0607 21:20:07.160309 635 log.go:172] (0xc0005a5290) (0xc0009f6280) Create stream\nI0607 21:20:07.160325 635 log.go:172] (0xc0005a5290) (0xc0009f6280) Stream added, broadcasting: 3\nI0607 21:20:07.161305 635 log.go:172] (0xc0005a5290) Reply frame received for 3\nI0607 21:20:07.161334 635 log.go:172] (0xc0005a5290) (0xc0009f63c0) Create stream\nI0607 21:20:07.161343 635 log.go:172] (0xc0005a5290) (0xc0009f63c0) Stream added, broadcasting: 5\nI0607 21:20:07.162329 635 log.go:172] (0xc0005a5290) Reply frame received for 5\nI0607 21:20:07.220782 635 log.go:172] (0xc0005a5290) Data frame received for 5\nI0607 21:20:07.220839 635 log.go:172] (0xc0009f63c0) (5) Data frame handling\nI0607 21:20:07.220856 635 log.go:172] (0xc0009f63c0) (5) Data frame sent\nI0607 21:20:07.220868 635 log.go:172] (0xc0005a5290) Data frame received for 5\nI0607 21:20:07.220877 635 log.go:172] (0xc0009f63c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31583\nConnection to 172.17.0.8 31583 port [tcp/31583] succeeded!\nI0607 21:20:07.220960 635 log.go:172] (0xc0005a5290) Data frame received for 3\nI0607 21:20:07.221023 635 log.go:172] (0xc0009f6280) (3) Data frame handling\nI0607 21:20:07.222628 635 log.go:172] (0xc0005a5290) Data frame received for 1\nI0607 21:20:07.222651 635 log.go:172] (0xc0009f61e0) (1) Data frame handling\nI0607 21:20:07.222659 635 log.go:172] (0xc0009f61e0) (1) Data frame sent\nI0607 21:20:07.222668 635 log.go:172] (0xc0005a5290) (0xc0009f61e0) Stream removed, broadcasting: 1\nI0607 21:20:07.222681 635 log.go:172] (0xc0005a5290) Go away received\nI0607 21:20:07.223101 635 log.go:172] (0xc0005a5290) (0xc0009f61e0) Stream removed, broadcasting: 1\nI0607 21:20:07.223124 635 log.go:172] (0xc0005a5290) (0xc0009f6280) Stream removed, broadcasting: 3\nI0607 21:20:07.223133 635 log.go:172] (0xc0005a5290) (0xc0009f63c0) Stream removed, broadcasting: 5\n" Jun 7 
21:20:07.228: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:20:07.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9256" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.962 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":33,"skipped":629,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:20:07.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:20:07.281: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7252 I0607 21:20:07.305915 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7252, replica count: 1 I0607 21:20:08.356445 6 
runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 21:20:09.356639 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 21:20:10.356898 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 21:20:11.357334 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 7 21:20:11.496: INFO: Created: latency-svc-4nnsc Jun 7 21:20:11.530: INFO: Got endpoints: latency-svc-4nnsc [73.008498ms] Jun 7 21:20:11.594: INFO: Created: latency-svc-9rzmq Jun 7 21:20:11.600: INFO: Got endpoints: latency-svc-9rzmq [69.479959ms] Jun 7 21:20:11.627: INFO: Created: latency-svc-bg4pm Jun 7 21:20:11.725: INFO: Got endpoints: latency-svc-bg4pm [194.474595ms] Jun 7 21:20:11.735: INFO: Created: latency-svc-k6flc Jun 7 21:20:11.758: INFO: Got endpoints: latency-svc-k6flc [227.300166ms] Jun 7 21:20:11.778: INFO: Created: latency-svc-m5pq8 Jun 7 21:20:11.799: INFO: Got endpoints: latency-svc-m5pq8 [269.010874ms] Jun 7 21:20:11.863: INFO: Created: latency-svc-4mvfz Jun 7 21:20:11.890: INFO: Got endpoints: latency-svc-4mvfz [359.37089ms] Jun 7 21:20:11.928: INFO: Created: latency-svc-qrwxj Jun 7 21:20:11.949: INFO: Got endpoints: latency-svc-qrwxj [418.990318ms] Jun 7 21:20:12.018: INFO: Created: latency-svc-c8jpk Jun 7 21:20:12.027: INFO: Got endpoints: latency-svc-c8jpk [496.492251ms] Jun 7 21:20:12.067: INFO: Created: latency-svc-dx6hd Jun 7 21:20:12.094: INFO: Got endpoints: latency-svc-dx6hd [563.484212ms] Jun 7 21:20:12.149: INFO: Created: latency-svc-x4jgn Jun 7 21:20:12.180: INFO: Got endpoints: latency-svc-x4jgn [649.707198ms] Jun 7 21:20:12.234: INFO: Created: latency-svc-7pft4 Jun 7 
21:20:12.238: INFO: Got endpoints: latency-svc-7pft4 [707.585337ms] Jun 7 21:20:12.291: INFO: Created: latency-svc-r5897 Jun 7 21:20:12.298: INFO: Got endpoints: latency-svc-r5897 [767.186799ms] Jun 7 21:20:12.317: INFO: Created: latency-svc-vlsx6 Jun 7 21:20:12.335: INFO: Got endpoints: latency-svc-vlsx6 [804.23312ms] Jun 7 21:20:12.371: INFO: Created: latency-svc-nnzx9 Jun 7 21:20:12.415: INFO: Got endpoints: latency-svc-nnzx9 [884.100725ms] Jun 7 21:20:12.426: INFO: Created: latency-svc-qrwg4 Jun 7 21:20:12.443: INFO: Got endpoints: latency-svc-qrwg4 [912.344037ms] Jun 7 21:20:12.461: INFO: Created: latency-svc-9pr4k Jun 7 21:20:12.479: INFO: Got endpoints: latency-svc-9pr4k [948.764671ms] Jun 7 21:20:12.570: INFO: Created: latency-svc-srtv4 Jun 7 21:20:12.599: INFO: Got endpoints: latency-svc-srtv4 [998.988489ms] Jun 7 21:20:12.600: INFO: Created: latency-svc-g4t47 Jun 7 21:20:12.618: INFO: Got endpoints: latency-svc-g4t47 [892.590314ms] Jun 7 21:20:12.659: INFO: Created: latency-svc-c5rpv Jun 7 21:20:12.708: INFO: Got endpoints: latency-svc-c5rpv [950.326126ms] Jun 7 21:20:12.755: INFO: Created: latency-svc-8jk48 Jun 7 21:20:12.770: INFO: Got endpoints: latency-svc-8jk48 [970.628586ms] Jun 7 21:20:12.803: INFO: Created: latency-svc-7tz2l Jun 7 21:20:12.839: INFO: Got endpoints: latency-svc-7tz2l [949.155852ms] Jun 7 21:20:12.875: INFO: Created: latency-svc-6qz67 Jun 7 21:20:12.888: INFO: Got endpoints: latency-svc-6qz67 [938.260213ms] Jun 7 21:20:12.911: INFO: Created: latency-svc-7bw7q Jun 7 21:20:12.925: INFO: Got endpoints: latency-svc-7bw7q [897.470093ms] Jun 7 21:20:12.977: INFO: Created: latency-svc-p68r7 Jun 7 21:20:12.980: INFO: Got endpoints: latency-svc-p68r7 [885.87818ms] Jun 7 21:20:13.007: INFO: Created: latency-svc-rjpgv Jun 7 21:20:13.021: INFO: Got endpoints: latency-svc-rjpgv [840.872231ms] Jun 7 21:20:13.043: INFO: Created: latency-svc-tdzsk Jun 7 21:20:13.144: INFO: Got endpoints: latency-svc-tdzsk [906.069902ms] Jun 7 21:20:13.181: INFO: 
Created: latency-svc-5wxds Jun 7 21:20:13.196: INFO: Got endpoints: latency-svc-5wxds [897.953946ms] Jun 7 21:20:13.218: INFO: Created: latency-svc-wprtl Jun 7 21:20:13.226: INFO: Got endpoints: latency-svc-wprtl [891.191225ms] Jun 7 21:20:13.318: INFO: Created: latency-svc-lvq5r Jun 7 21:20:13.328: INFO: Got endpoints: latency-svc-lvq5r [912.834304ms] Jun 7 21:20:13.379: INFO: Created: latency-svc-zc9gt Jun 7 21:20:13.395: INFO: Got endpoints: latency-svc-zc9gt [951.739267ms] Jun 7 21:20:13.462: INFO: Created: latency-svc-kzsw4 Jun 7 21:20:13.466: INFO: Got endpoints: latency-svc-kzsw4 [987.036217ms] Jun 7 21:20:13.497: INFO: Created: latency-svc-wwbpb Jun 7 21:20:13.508: INFO: Got endpoints: latency-svc-wwbpb [909.320891ms] Jun 7 21:20:13.529: INFO: Created: latency-svc-tp4fw Jun 7 21:20:13.539: INFO: Got endpoints: latency-svc-tp4fw [920.939366ms] Jun 7 21:20:13.559: INFO: Created: latency-svc-7zrkk Jun 7 21:20:13.600: INFO: Got endpoints: latency-svc-7zrkk [891.917459ms] Jun 7 21:20:13.624: INFO: Created: latency-svc-s47l8 Jun 7 21:20:13.642: INFO: Got endpoints: latency-svc-s47l8 [871.860822ms] Jun 7 21:20:13.673: INFO: Created: latency-svc-m72cn Jun 7 21:20:13.690: INFO: Got endpoints: latency-svc-m72cn [851.052694ms] Jun 7 21:20:13.739: INFO: Created: latency-svc-r5njj Jun 7 21:20:13.763: INFO: Got endpoints: latency-svc-r5njj [875.114378ms] Jun 7 21:20:13.793: INFO: Created: latency-svc-k72f4 Jun 7 21:20:13.823: INFO: Got endpoints: latency-svc-k72f4 [898.626082ms] Jun 7 21:20:13.875: INFO: Created: latency-svc-9zszj Jun 7 21:20:13.888: INFO: Got endpoints: latency-svc-9zszj [908.209884ms] Jun 7 21:20:13.919: INFO: Created: latency-svc-9hxhs Jun 7 21:20:13.948: INFO: Got endpoints: latency-svc-9hxhs [927.131238ms] Jun 7 21:20:14.024: INFO: Created: latency-svc-8dptt Jun 7 21:20:14.028: INFO: Got endpoints: latency-svc-8dptt [883.223311ms] Jun 7 21:20:14.062: INFO: Created: latency-svc-9lqnw Jun 7 21:20:14.093: INFO: Got endpoints: latency-svc-9lqnw 
[896.613159ms] Jun 7 21:20:14.123: INFO: Created: latency-svc-c9cmc Jun 7 21:20:14.162: INFO: Got endpoints: latency-svc-c9cmc [936.317611ms] Jun 7 21:20:14.195: INFO: Created: latency-svc-jd9cr Jun 7 21:20:14.207: INFO: Got endpoints: latency-svc-jd9cr [879.701248ms] Jun 7 21:20:14.231: INFO: Created: latency-svc-vctkz Jun 7 21:20:14.244: INFO: Got endpoints: latency-svc-vctkz [849.22861ms] Jun 7 21:20:14.330: INFO: Created: latency-svc-dg2c6 Jun 7 21:20:14.334: INFO: Got endpoints: latency-svc-dg2c6 [867.538942ms] Jun 7 21:20:14.363: INFO: Created: latency-svc-qf9w8 Jun 7 21:20:14.376: INFO: Got endpoints: latency-svc-qf9w8 [867.657264ms] Jun 7 21:20:14.399: INFO: Created: latency-svc-ls8bf Jun 7 21:20:14.462: INFO: Got endpoints: latency-svc-ls8bf [922.951599ms] Jun 7 21:20:14.495: INFO: Created: latency-svc-8smfg Jun 7 21:20:14.518: INFO: Got endpoints: latency-svc-8smfg [918.456938ms] Jun 7 21:20:14.555: INFO: Created: latency-svc-qwdmm Jun 7 21:20:14.617: INFO: Got endpoints: latency-svc-qwdmm [975.115062ms] Jun 7 21:20:14.620: INFO: Created: latency-svc-c89bc Jun 7 21:20:14.635: INFO: Got endpoints: latency-svc-c89bc [945.231365ms] Jun 7 21:20:14.675: INFO: Created: latency-svc-4p2wl Jun 7 21:20:14.702: INFO: Got endpoints: latency-svc-4p2wl [938.824272ms] Jun 7 21:20:14.755: INFO: Created: latency-svc-v6v6t Jun 7 21:20:14.776: INFO: Got endpoints: latency-svc-v6v6t [952.509445ms] Jun 7 21:20:14.812: INFO: Created: latency-svc-7vh5m Jun 7 21:20:14.834: INFO: Got endpoints: latency-svc-7vh5m [945.690925ms] Jun 7 21:20:14.893: INFO: Created: latency-svc-vx8sh Jun 7 21:20:14.920: INFO: Got endpoints: latency-svc-vx8sh [971.866762ms] Jun 7 21:20:15.246: INFO: Created: latency-svc-p6j5r Jun 7 21:20:15.391: INFO: Got endpoints: latency-svc-p6j5r [1.36318653s] Jun 7 21:20:15.443: INFO: Created: latency-svc-wkbph Jun 7 21:20:15.448: INFO: Got endpoints: latency-svc-wkbph [1.354977769s] Jun 7 21:20:15.475: INFO: Created: latency-svc-rbz8t Jun 7 21:20:15.569: INFO: 
Got endpoints: latency-svc-rbz8t [1.407193931s] Jun 7 21:20:15.571: INFO: Created: latency-svc-scjw6 Jun 7 21:20:15.958: INFO: Got endpoints: latency-svc-scjw6 [1.75079879s] Jun 7 21:20:16.110: INFO: Created: latency-svc-5frhl Jun 7 21:20:16.126: INFO: Got endpoints: latency-svc-5frhl [1.881568883s] Jun 7 21:20:16.156: INFO: Created: latency-svc-r67cx Jun 7 21:20:16.173: INFO: Got endpoints: latency-svc-r67cx [1.839335054s] Jun 7 21:20:16.205: INFO: Created: latency-svc-j52f7 Jun 7 21:20:16.264: INFO: Got endpoints: latency-svc-j52f7 [1.887666091s] Jun 7 21:20:16.312: INFO: Created: latency-svc-lfqj6 Jun 7 21:20:16.342: INFO: Got endpoints: latency-svc-lfqj6 [1.880292862s] Jun 7 21:20:16.397: INFO: Created: latency-svc-dlgwn Jun 7 21:20:16.414: INFO: Got endpoints: latency-svc-dlgwn [1.895615595s] Jun 7 21:20:16.462: INFO: Created: latency-svc-52g8r Jun 7 21:20:16.474: INFO: Got endpoints: latency-svc-52g8r [1.85730183s] Jun 7 21:20:16.492: INFO: Created: latency-svc-ks5nd Jun 7 21:20:16.552: INFO: Got endpoints: latency-svc-ks5nd [1.916644601s] Jun 7 21:20:16.553: INFO: Created: latency-svc-mqfnz Jun 7 21:20:16.559: INFO: Got endpoints: latency-svc-mqfnz [1.856778219s] Jun 7 21:20:16.582: INFO: Created: latency-svc-7qxx2 Jun 7 21:20:16.602: INFO: Got endpoints: latency-svc-7qxx2 [1.825495677s] Jun 7 21:20:16.636: INFO: Created: latency-svc-hq5d8 Jun 7 21:20:16.652: INFO: Got endpoints: latency-svc-hq5d8 [1.817773429s] Jun 7 21:20:16.726: INFO: Created: latency-svc-pqffc Jun 7 21:20:16.740: INFO: Got endpoints: latency-svc-pqffc [1.819578724s] Jun 7 21:20:16.781: INFO: Created: latency-svc-2w8lk Jun 7 21:20:16.810: INFO: Got endpoints: latency-svc-2w8lk [1.419411818s] Jun 7 21:20:16.869: INFO: Created: latency-svc-r8l6w Jun 7 21:20:16.879: INFO: Got endpoints: latency-svc-r8l6w [1.430901913s] Jun 7 21:20:16.905: INFO: Created: latency-svc-w6sdq Jun 7 21:20:16.927: INFO: Got endpoints: latency-svc-w6sdq [1.357132862s] Jun 7 21:20:17.142: INFO: Created: 
latency-svc-vsxlg Jun 7 21:20:17.291: INFO: Got endpoints: latency-svc-vsxlg [1.332620012s] Jun 7 21:20:17.472: INFO: Created: latency-svc-rwft2 Jun 7 21:20:17.479: INFO: Got endpoints: latency-svc-rwft2 [1.353406491s] Jun 7 21:20:17.513: INFO: Created: latency-svc-62bkp Jun 7 21:20:17.520: INFO: Got endpoints: latency-svc-62bkp [1.346895939s] Jun 7 21:20:17.543: INFO: Created: latency-svc-nxvff Jun 7 21:20:17.551: INFO: Got endpoints: latency-svc-nxvff [1.287265802s] Jun 7 21:20:17.605: INFO: Created: latency-svc-8pnrq Jun 7 21:20:17.617: INFO: Got endpoints: latency-svc-8pnrq [1.275083276s] Jun 7 21:20:17.659: INFO: Created: latency-svc-hg5gl Jun 7 21:20:17.870: INFO: Got endpoints: latency-svc-hg5gl [1.455664125s] Jun 7 21:20:17.891: INFO: Created: latency-svc-dfxrz Jun 7 21:20:18.024: INFO: Got endpoints: latency-svc-dfxrz [1.549673852s] Jun 7 21:20:18.060: INFO: Created: latency-svc-jmlz4 Jun 7 21:20:18.073: INFO: Got endpoints: latency-svc-jmlz4 [1.52130404s] Jun 7 21:20:18.101: INFO: Created: latency-svc-bb6v5 Jun 7 21:20:18.181: INFO: Got endpoints: latency-svc-bb6v5 [1.622202819s] Jun 7 21:20:18.215: INFO: Created: latency-svc-zl2xj Jun 7 21:20:18.230: INFO: Got endpoints: latency-svc-zl2xj [1.628810881s] Jun 7 21:20:18.271: INFO: Created: latency-svc-kghf2 Jun 7 21:20:18.336: INFO: Got endpoints: latency-svc-kghf2 [1.684361458s] Jun 7 21:20:18.422: INFO: Created: latency-svc-88flr Jun 7 21:20:18.435: INFO: Got endpoints: latency-svc-88flr [1.694567317s] Jun 7 21:20:18.510: INFO: Created: latency-svc-z896s Jun 7 21:20:18.519: INFO: Got endpoints: latency-svc-z896s [1.708954581s] Jun 7 21:20:18.546: INFO: Created: latency-svc-p7w5h Jun 7 21:20:18.579: INFO: Got endpoints: latency-svc-p7w5h [1.700804103s] Jun 7 21:20:18.661: INFO: Created: latency-svc-bcqdv Jun 7 21:20:18.670: INFO: Got endpoints: latency-svc-bcqdv [1.742975701s] Jun 7 21:20:18.733: INFO: Created: latency-svc-9qplx Jun 7 21:20:18.748: INFO: Got endpoints: latency-svc-9qplx [1.457084136s] Jun 
7 21:20:18.804: INFO: Created: latency-svc-vg2lp Jun 7 21:20:18.814: INFO: Got endpoints: latency-svc-vg2lp [1.335005973s] Jun 7 21:20:18.841: INFO: Created: latency-svc-4dfbw Jun 7 21:20:18.856: INFO: Got endpoints: latency-svc-4dfbw [1.336017699s] Jun 7 21:20:18.875: INFO: Created: latency-svc-b2c2c Jun 7 21:20:18.893: INFO: Got endpoints: latency-svc-b2c2c [1.341810327s] Jun 7 21:20:18.953: INFO: Created: latency-svc-78q7x Jun 7 21:20:18.962: INFO: Got endpoints: latency-svc-78q7x [1.344746399s] Jun 7 21:20:19.008: INFO: Created: latency-svc-qflxj Jun 7 21:20:19.026: INFO: Got endpoints: latency-svc-qflxj [1.155740642s] Jun 7 21:20:19.050: INFO: Created: latency-svc-9njts Jun 7 21:20:19.097: INFO: Got endpoints: latency-svc-9njts [1.072288461s] Jun 7 21:20:19.115: INFO: Created: latency-svc-dlt9r Jun 7 21:20:19.127: INFO: Got endpoints: latency-svc-dlt9r [1.053984647s] Jun 7 21:20:19.157: INFO: Created: latency-svc-6f59l Jun 7 21:20:19.170: INFO: Got endpoints: latency-svc-6f59l [989.148025ms] Jun 7 21:20:19.194: INFO: Created: latency-svc-n9k25 Jun 7 21:20:19.234: INFO: Got endpoints: latency-svc-n9k25 [1.003534651s] Jun 7 21:20:19.247: INFO: Created: latency-svc-97t5p Jun 7 21:20:19.261: INFO: Got endpoints: latency-svc-97t5p [924.749531ms] Jun 7 21:20:19.283: INFO: Created: latency-svc-94752 Jun 7 21:20:19.300: INFO: Got endpoints: latency-svc-94752 [864.930132ms] Jun 7 21:20:19.326: INFO: Created: latency-svc-pcwbw Jun 7 21:20:19.372: INFO: Got endpoints: latency-svc-pcwbw [852.667059ms] Jun 7 21:20:19.386: INFO: Created: latency-svc-4xsws Jun 7 21:20:19.396: INFO: Got endpoints: latency-svc-4xsws [816.523324ms] Jun 7 21:20:19.439: INFO: Created: latency-svc-qgn8k Jun 7 21:20:19.469: INFO: Got endpoints: latency-svc-qgn8k [799.437593ms] Jun 7 21:20:19.539: INFO: Created: latency-svc-d2bkm Jun 7 21:20:19.542: INFO: Got endpoints: latency-svc-d2bkm [793.853688ms] Jun 7 21:20:19.602: INFO: Created: latency-svc-v8lth Jun 7 21:20:19.618: INFO: Got endpoints: 
latency-svc-v8lth [804.254044ms] Jun 7 21:20:19.678: INFO: Created: latency-svc-jkqnz Jun 7 21:20:19.681: INFO: Got endpoints: latency-svc-jkqnz [824.805118ms] Jun 7 21:20:19.728: INFO: Created: latency-svc-dh9m6 Jun 7 21:20:19.746: INFO: Got endpoints: latency-svc-dh9m6 [852.615426ms] Jun 7 21:20:19.828: INFO: Created: latency-svc-xp7d6 Jun 7 21:20:19.832: INFO: Got endpoints: latency-svc-xp7d6 [869.793264ms] Jun 7 21:20:19.965: INFO: Created: latency-svc-gjvp5 Jun 7 21:20:19.979: INFO: Got endpoints: latency-svc-gjvp5 [953.468126ms] Jun 7 21:20:20.016: INFO: Created: latency-svc-wzcgq Jun 7 21:20:20.046: INFO: Got endpoints: latency-svc-wzcgq [949.30653ms] Jun 7 21:20:20.111: INFO: Created: latency-svc-h42r7 Jun 7 21:20:20.116: INFO: Got endpoints: latency-svc-h42r7 [988.659185ms] Jun 7 21:20:20.171: INFO: Created: latency-svc-tc5lt Jun 7 21:20:20.235: INFO: Got endpoints: latency-svc-tc5lt [1.064396835s] Jun 7 21:20:20.262: INFO: Created: latency-svc-998m7 Jun 7 21:20:20.274: INFO: Got endpoints: latency-svc-998m7 [1.040064226s] Jun 7 21:20:20.303: INFO: Created: latency-svc-xth54 Jun 7 21:20:20.403: INFO: Got endpoints: latency-svc-xth54 [1.141852008s] Jun 7 21:20:20.406: INFO: Created: latency-svc-d6wl6 Jun 7 21:20:20.412: INFO: Got endpoints: latency-svc-d6wl6 [1.112672403s] Jun 7 21:20:20.436: INFO: Created: latency-svc-6h4jn Jun 7 21:20:20.449: INFO: Got endpoints: latency-svc-6h4jn [1.076741973s] Jun 7 21:20:20.472: INFO: Created: latency-svc-wlfct Jun 7 21:20:20.497: INFO: Got endpoints: latency-svc-wlfct [1.101207117s] Jun 7 21:20:20.552: INFO: Created: latency-svc-n4n26 Jun 7 21:20:20.556: INFO: Got endpoints: latency-svc-n4n26 [1.087010478s] Jun 7 21:20:20.579: INFO: Created: latency-svc-sgf5w Jun 7 21:20:20.594: INFO: Got endpoints: latency-svc-sgf5w [1.051847003s] Jun 7 21:20:20.615: INFO: Created: latency-svc-p8pfj Jun 7 21:20:20.630: INFO: Got endpoints: latency-svc-p8pfj [1.011580494s] Jun 7 21:20:20.689: INFO: Created: latency-svc-wfmtk Jun 7 
21:20:20.714: INFO: Got endpoints: latency-svc-wfmtk [1.032873145s] Jun 7 21:20:20.765: INFO: Created: latency-svc-klxpc Jun 7 21:20:20.780: INFO: Got endpoints: latency-svc-klxpc [1.03456407s] Jun 7 21:20:20.833: INFO: Created: latency-svc-l9zx5 Jun 7 21:20:20.840: INFO: Got endpoints: latency-svc-l9zx5 [1.008236068s] Jun 7 21:20:20.867: INFO: Created: latency-svc-95rrz Jun 7 21:20:20.883: INFO: Got endpoints: latency-svc-95rrz [903.787942ms] Jun 7 21:20:20.905: INFO: Created: latency-svc-jvnnd Jun 7 21:20:20.913: INFO: Got endpoints: latency-svc-jvnnd [866.637023ms] Jun 7 21:20:20.983: INFO: Created: latency-svc-h22xs Jun 7 21:20:20.998: INFO: Got endpoints: latency-svc-h22xs [881.670323ms] Jun 7 21:20:21.029: INFO: Created: latency-svc-n86tc Jun 7 21:20:21.045: INFO: Got endpoints: latency-svc-n86tc [810.717242ms] Jun 7 21:20:21.065: INFO: Created: latency-svc-4h68s Jun 7 21:20:21.082: INFO: Got endpoints: latency-svc-4h68s [807.623604ms] Jun 7 21:20:21.131: INFO: Created: latency-svc-94gxb Jun 7 21:20:21.162: INFO: Got endpoints: latency-svc-94gxb [758.653286ms] Jun 7 21:20:21.203: INFO: Created: latency-svc-k98rx Jun 7 21:20:21.264: INFO: Got endpoints: latency-svc-k98rx [851.549335ms] Jun 7 21:20:21.266: INFO: Created: latency-svc-lszh7 Jun 7 21:20:21.274: INFO: Got endpoints: latency-svc-lszh7 [825.374954ms] Jun 7 21:20:21.305: INFO: Created: latency-svc-ck6jw Jun 7 21:20:21.323: INFO: Got endpoints: latency-svc-ck6jw [825.553356ms] Jun 7 21:20:21.342: INFO: Created: latency-svc-htjlf Jun 7 21:20:21.359: INFO: Got endpoints: latency-svc-htjlf [802.976858ms] Jun 7 21:20:21.408: INFO: Created: latency-svc-w8lkj Jun 7 21:20:21.425: INFO: Got endpoints: latency-svc-w8lkj [831.411446ms] Jun 7 21:20:21.455: INFO: Created: latency-svc-pxtg2 Jun 7 21:20:21.551: INFO: Got endpoints: latency-svc-pxtg2 [921.102238ms] Jun 7 21:20:21.554: INFO: Created: latency-svc-z4qtl Jun 7 21:20:21.678: INFO: Created: latency-svc-55kgz Jun 7 21:20:21.678: INFO: Got endpoints: 
latency-svc-z4qtl [964.163352ms] Jun 7 21:20:21.682: INFO: Got endpoints: latency-svc-55kgz [901.830442ms] Jun 7 21:20:21.773: INFO: Created: latency-svc-sngk7 Jun 7 21:20:21.803: INFO: Got endpoints: latency-svc-sngk7 [962.56593ms] Jun 7 21:20:21.814: INFO: Created: latency-svc-jfdf8 Jun 7 21:20:21.828: INFO: Got endpoints: latency-svc-jfdf8 [945.16211ms] Jun 7 21:20:21.851: INFO: Created: latency-svc-lkp59 Jun 7 21:20:21.865: INFO: Got endpoints: latency-svc-lkp59 [952.325331ms] Jun 7 21:20:21.887: INFO: Created: latency-svc-lc5bf Jun 7 21:20:21.935: INFO: Got endpoints: latency-svc-lc5bf [936.787325ms] Jun 7 21:20:21.947: INFO: Created: latency-svc-nz27q Jun 7 21:20:21.973: INFO: Got endpoints: latency-svc-nz27q [927.781526ms] Jun 7 21:20:22.013: INFO: Created: latency-svc-6fl7d Jun 7 21:20:22.028: INFO: Got endpoints: latency-svc-6fl7d [945.706804ms] Jun 7 21:20:22.073: INFO: Created: latency-svc-brdjd Jun 7 21:20:22.082: INFO: Got endpoints: latency-svc-brdjd [920.059986ms] Jun 7 21:20:22.109: INFO: Created: latency-svc-vw9mv Jun 7 21:20:22.124: INFO: Got endpoints: latency-svc-vw9mv [859.849664ms] Jun 7 21:20:22.151: INFO: Created: latency-svc-m7znc Jun 7 21:20:22.172: INFO: Got endpoints: latency-svc-m7znc [897.468696ms] Jun 7 21:20:22.228: INFO: Created: latency-svc-6kgk8 Jun 7 21:20:22.244: INFO: Got endpoints: latency-svc-6kgk8 [921.168362ms] Jun 7 21:20:22.307: INFO: Created: latency-svc-zj8mk Jun 7 21:20:22.327: INFO: Got endpoints: latency-svc-zj8mk [967.802888ms] Jun 7 21:20:22.396: INFO: Created: latency-svc-jjgcl Jun 7 21:20:22.400: INFO: Got endpoints: latency-svc-jjgcl [974.131603ms] Jun 7 21:20:22.463: INFO: Created: latency-svc-grjck Jun 7 21:20:22.478: INFO: Got endpoints: latency-svc-grjck [926.950945ms] Jun 7 21:20:22.542: INFO: Created: latency-svc-g5wt6 Jun 7 21:20:22.545: INFO: Got endpoints: latency-svc-g5wt6 [866.945486ms] Jun 7 21:20:22.576: INFO: Created: latency-svc-dpg64 Jun 7 21:20:22.592: INFO: Got endpoints: latency-svc-dpg64 
[909.754264ms] Jun 7 21:20:22.613: INFO: Created: latency-svc-48dm4 Jun 7 21:20:22.628: INFO: Got endpoints: latency-svc-48dm4 [825.699235ms] Jun 7 21:20:22.683: INFO: Created: latency-svc-gqbvf Jun 7 21:20:22.689: INFO: Got endpoints: latency-svc-gqbvf [860.486054ms] Jun 7 21:20:22.732: INFO: Created: latency-svc-t6s58 Jun 7 21:20:22.767: INFO: Got endpoints: latency-svc-t6s58 [901.958252ms] Jun 7 21:20:22.833: INFO: Created: latency-svc-qzqq5 Jun 7 21:20:22.835: INFO: Got endpoints: latency-svc-qzqq5 [900.213634ms] Jun 7 21:20:22.864: INFO: Created: latency-svc-vlmhh Jun 7 21:20:22.895: INFO: Got endpoints: latency-svc-vlmhh [921.883711ms] Jun 7 21:20:22.976: INFO: Created: latency-svc-pgcjm Jun 7 21:20:22.979: INFO: Got endpoints: latency-svc-pgcjm [951.18486ms] Jun 7 21:20:23.008: INFO: Created: latency-svc-58hvp Jun 7 21:20:23.020: INFO: Got endpoints: latency-svc-58hvp [937.938396ms] Jun 7 21:20:23.039: INFO: Created: latency-svc-xpt77 Jun 7 21:20:23.050: INFO: Got endpoints: latency-svc-xpt77 [926.082671ms] Jun 7 21:20:23.069: INFO: Created: latency-svc-g64s9 Jun 7 21:20:23.120: INFO: Got endpoints: latency-svc-g64s9 [948.500113ms] Jun 7 21:20:23.128: INFO: Created: latency-svc-9lvq8 Jun 7 21:20:23.147: INFO: Got endpoints: latency-svc-9lvq8 [902.62032ms] Jun 7 21:20:23.176: INFO: Created: latency-svc-8kf6p Jun 7 21:20:23.189: INFO: Got endpoints: latency-svc-8kf6p [861.744983ms] Jun 7 21:20:23.218: INFO: Created: latency-svc-76dql Jun 7 21:20:23.258: INFO: Got endpoints: latency-svc-76dql [858.475611ms] Jun 7 21:20:23.266: INFO: Created: latency-svc-sdphl Jun 7 21:20:23.280: INFO: Got endpoints: latency-svc-sdphl [801.912687ms] Jun 7 21:20:23.303: INFO: Created: latency-svc-mj9vj Jun 7 21:20:23.316: INFO: Got endpoints: latency-svc-mj9vj [770.404691ms] Jun 7 21:20:23.338: INFO: Created: latency-svc-zkrsm Jun 7 21:20:23.352: INFO: Got endpoints: latency-svc-zkrsm [760.425017ms] Jun 7 21:20:23.408: INFO: Created: latency-svc-58bzj Jun 7 21:20:23.419: INFO: 
Got endpoints: latency-svc-58bzj [791.096436ms] Jun 7 21:20:23.446: INFO: Created: latency-svc-pjms8 Jun 7 21:20:23.470: INFO: Got endpoints: latency-svc-pjms8 [781.485939ms] Jun 7 21:20:23.500: INFO: Created: latency-svc-49tx8 Jun 7 21:20:23.539: INFO: Got endpoints: latency-svc-49tx8 [772.137029ms] Jun 7 21:20:23.554: INFO: Created: latency-svc-x8kzh Jun 7 21:20:23.569: INFO: Got endpoints: latency-svc-x8kzh [734.229125ms] Jun 7 21:20:23.590: INFO: Created: latency-svc-bgkjt Jun 7 21:20:23.612: INFO: Got endpoints: latency-svc-bgkjt [716.364192ms] Jun 7 21:20:23.678: INFO: Created: latency-svc-pwd5q Jun 7 21:20:23.684: INFO: Got endpoints: latency-svc-pwd5q [704.953877ms] Jun 7 21:20:23.741: INFO: Created: latency-svc-kp6jn Jun 7 21:20:23.756: INFO: Got endpoints: latency-svc-kp6jn [736.468511ms] Jun 7 21:20:23.815: INFO: Created: latency-svc-ftqwp Jun 7 21:20:23.843: INFO: Got endpoints: latency-svc-ftqwp [792.632004ms] Jun 7 21:20:23.845: INFO: Created: latency-svc-gl4gj Jun 7 21:20:23.872: INFO: Got endpoints: latency-svc-gl4gj [751.394919ms] Jun 7 21:20:23.909: INFO: Created: latency-svc-8f4mq Jun 7 21:20:23.952: INFO: Got endpoints: latency-svc-8f4mq [805.518679ms] Jun 7 21:20:23.955: INFO: Created: latency-svc-29bcr Jun 7 21:20:23.979: INFO: Got endpoints: latency-svc-29bcr [790.497531ms] Jun 7 21:20:24.010: INFO: Created: latency-svc-xjlt8 Jun 7 21:20:24.022: INFO: Got endpoints: latency-svc-xjlt8 [763.771295ms] Jun 7 21:20:24.040: INFO: Created: latency-svc-jtg4g Jun 7 21:20:24.084: INFO: Got endpoints: latency-svc-jtg4g [804.115572ms] Jun 7 21:20:24.095: INFO: Created: latency-svc-l4rgq Jun 7 21:20:24.124: INFO: Got endpoints: latency-svc-l4rgq [808.007598ms] Jun 7 21:20:24.160: INFO: Created: latency-svc-8hqwc Jun 7 21:20:24.172: INFO: Got endpoints: latency-svc-8hqwc [819.840542ms] Jun 7 21:20:24.240: INFO: Created: latency-svc-85gj8 Jun 7 21:20:24.251: INFO: Got endpoints: latency-svc-85gj8 [831.004392ms] Jun 7 21:20:24.268: INFO: Created: 
latency-svc-7cmcr Jun 7 21:20:24.281: INFO: Got endpoints: latency-svc-7cmcr [810.916611ms] Jun 7 21:20:24.304: INFO: Created: latency-svc-2jpb7 Jun 7 21:20:24.318: INFO: Got endpoints: latency-svc-2jpb7 [778.127651ms] Jun 7 21:20:24.378: INFO: Created: latency-svc-nr655 Jun 7 21:20:24.386: INFO: Got endpoints: latency-svc-nr655 [816.311747ms] Jun 7 21:20:24.418: INFO: Created: latency-svc-rmzg6 Jun 7 21:20:24.432: INFO: Got endpoints: latency-svc-rmzg6 [820.058499ms] Jun 7 21:20:24.467: INFO: Created: latency-svc-x4nzq Jun 7 21:20:24.546: INFO: Got endpoints: latency-svc-x4nzq [861.703735ms] Jun 7 21:20:24.549: INFO: Created: latency-svc-l4v7w Jun 7 21:20:24.558: INFO: Got endpoints: latency-svc-l4v7w [801.950534ms] Jun 7 21:20:24.581: INFO: Created: latency-svc-crr84 Jun 7 21:20:24.595: INFO: Got endpoints: latency-svc-crr84 [751.857577ms] Jun 7 21:20:24.622: INFO: Created: latency-svc-sj5mn Jun 7 21:20:24.639: INFO: Got endpoints: latency-svc-sj5mn [766.837423ms] Jun 7 21:20:24.677: INFO: Created: latency-svc-2jf9x Jun 7 21:20:24.685: INFO: Got endpoints: latency-svc-2jf9x [732.607234ms] Jun 7 21:20:24.706: INFO: Created: latency-svc-2pn92 Jun 7 21:20:24.715: INFO: Got endpoints: latency-svc-2pn92 [735.842356ms] Jun 7 21:20:24.743: INFO: Created: latency-svc-kgtj2 Jun 7 21:20:24.758: INFO: Got endpoints: latency-svc-kgtj2 [735.84824ms] Jun 7 21:20:24.814: INFO: Created: latency-svc-tkjw7 Jun 7 21:20:24.824: INFO: Got endpoints: latency-svc-tkjw7 [739.608866ms] Jun 7 21:20:24.982: INFO: Created: latency-svc-rdh6q Jun 7 21:20:24.992: INFO: Got endpoints: latency-svc-rdh6q [868.201301ms] Jun 7 21:20:25.018: INFO: Created: latency-svc-l4tr6 Jun 7 21:20:25.246: INFO: Got endpoints: latency-svc-l4tr6 [1.074099817s] Jun 7 21:20:25.275: INFO: Created: latency-svc-8nd6z Jun 7 21:20:25.293: INFO: Got endpoints: latency-svc-8nd6z [1.041983026s] Jun 7 21:20:25.324: INFO: Created: latency-svc-jm76f Jun 7 21:20:25.341: INFO: Got endpoints: latency-svc-jm76f [1.059104741s] Jun 
7 21:20:25.384: INFO: Created: latency-svc-6qdhr Jun 7 21:20:25.388: INFO: Got endpoints: latency-svc-6qdhr [1.070467742s] Jun 7 21:20:25.426: INFO: Created: latency-svc-tbb8n Jun 7 21:20:25.455: INFO: Got endpoints: latency-svc-tbb8n [1.069486089s] Jun 7 21:20:25.455: INFO: Latencies: [69.479959ms 194.474595ms 227.300166ms 269.010874ms 359.37089ms 418.990318ms 496.492251ms 563.484212ms 649.707198ms 704.953877ms 707.585337ms 716.364192ms 732.607234ms 734.229125ms 735.842356ms 735.84824ms 736.468511ms 739.608866ms 751.394919ms 751.857577ms 758.653286ms 760.425017ms 763.771295ms 766.837423ms 767.186799ms 770.404691ms 772.137029ms 778.127651ms 781.485939ms 790.497531ms 791.096436ms 792.632004ms 793.853688ms 799.437593ms 801.912687ms 801.950534ms 802.976858ms 804.115572ms 804.23312ms 804.254044ms 805.518679ms 807.623604ms 808.007598ms 810.717242ms 810.916611ms 816.311747ms 816.523324ms 819.840542ms 820.058499ms 824.805118ms 825.374954ms 825.553356ms 825.699235ms 831.004392ms 831.411446ms 840.872231ms 849.22861ms 851.052694ms 851.549335ms 852.615426ms 852.667059ms 858.475611ms 859.849664ms 860.486054ms 861.703735ms 861.744983ms 864.930132ms 866.637023ms 866.945486ms 867.538942ms 867.657264ms 868.201301ms 869.793264ms 871.860822ms 875.114378ms 879.701248ms 881.670323ms 883.223311ms 884.100725ms 885.87818ms 891.191225ms 891.917459ms 892.590314ms 896.613159ms 897.468696ms 897.470093ms 897.953946ms 898.626082ms 900.213634ms 901.830442ms 901.958252ms 902.62032ms 903.787942ms 906.069902ms 908.209884ms 909.320891ms 909.754264ms 912.344037ms 912.834304ms 918.456938ms 920.059986ms 920.939366ms 921.102238ms 921.168362ms 921.883711ms 922.951599ms 924.749531ms 926.082671ms 926.950945ms 927.131238ms 927.781526ms 936.317611ms 936.787325ms 937.938396ms 938.260213ms 938.824272ms 945.16211ms 945.231365ms 945.690925ms 945.706804ms 948.500113ms 948.764671ms 949.155852ms 949.30653ms 950.326126ms 951.18486ms 951.739267ms 952.325331ms 952.509445ms 953.468126ms 962.56593ms 964.163352ms 
967.802888ms 970.628586ms 971.866762ms 974.131603ms 975.115062ms 987.036217ms 988.659185ms 989.148025ms 998.988489ms 1.003534651s 1.008236068s 1.011580494s 1.032873145s 1.03456407s 1.040064226s 1.041983026s 1.051847003s 1.053984647s 1.059104741s 1.064396835s 1.069486089s 1.070467742s 1.072288461s 1.074099817s 1.076741973s 1.087010478s 1.101207117s 1.112672403s 1.141852008s 1.155740642s 1.275083276s 1.287265802s 1.332620012s 1.335005973s 1.336017699s 1.341810327s 1.344746399s 1.346895939s 1.353406491s 1.354977769s 1.357132862s 1.36318653s 1.407193931s 1.419411818s 1.430901913s 1.455664125s 1.457084136s 1.52130404s 1.549673852s 1.622202819s 1.628810881s 1.684361458s 1.694567317s 1.700804103s 1.708954581s 1.742975701s 1.75079879s 1.817773429s 1.819578724s 1.825495677s 1.839335054s 1.856778219s 1.85730183s 1.880292862s 1.881568883s 1.887666091s 1.895615595s 1.916644601s] Jun 7 21:20:25.455: INFO: 50 %ile: 920.059986ms Jun 7 21:20:25.455: INFO: 90 %ile: 1.549673852s Jun 7 21:20:25.455: INFO: 99 %ile: 1.895615595s Jun 7 21:20:25.455: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:20:25.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7252" for this suite. 
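The summary lines above (50 %ile, 90 %ile, 99 %ile over a total sample count of 200) are plain order statistics over the sorted per-service latencies. A minimal sketch of that computation follows; the exact rounding convention (ceil of p% of N, 1-based indexing) is an assumption here, not necessarily what the e2e framework uses in edge cases:

```python
def percentile(latencies, p):
    """Return the p-th percentile (0 < p <= 100) of latency samples
    (seconds) by sorting and indexing -- the idea behind the suite's
    "50 %ile / 90 %ile / 99 %ile" summary lines.
    Rounding convention (ceil of N*p/100, 1-based) is an assumption."""
    s = sorted(latencies)
    # ceil(len(s) * p / 100), clamped to at least 1
    idx = max(1, -(-len(s) * p // 100))
    return s[idx - 1]

samples = [0.9, 0.7, 1.2, 0.8]
print(percentile(samples, 50), percentile(samples, 99))  # 0.8 1.2
```

With 200 samples, the 50 %ile is simply the 100th sorted value and the 99 %ile the 198th, which is why the reported figures (920ms, 1.89s) appear verbatim in the sorted latency list above.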
• [SLOW TEST:18.232 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":34,"skipped":640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:20:25.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:20:25.554: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 7 21:20:28.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2516 create -f -' Jun 7 21:20:31.985: INFO: stderr: "" Jun 7 21:20:31.985: INFO: stdout: "e2e-test-crd-publish-openapi-7229-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 7 21:20:31.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2516 
delete e2e-test-crd-publish-openapi-7229-crds test-cr' Jun 7 21:20:32.173: INFO: stderr: "" Jun 7 21:20:32.173: INFO: stdout: "e2e-test-crd-publish-openapi-7229-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 7 21:20:32.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2516 apply -f -' Jun 7 21:20:32.514: INFO: stderr: "" Jun 7 21:20:32.514: INFO: stdout: "e2e-test-crd-publish-openapi-7229-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 7 21:20:32.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2516 delete e2e-test-crd-publish-openapi-7229-crds test-cr' Jun 7 21:20:32.715: INFO: stderr: "" Jun 7 21:20:32.715: INFO: stdout: "e2e-test-crd-publish-openapi-7229-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 7 21:20:32.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7229-crds' Jun 7 21:20:33.018: INFO: stderr: "" Jun 7 21:20:33.018: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7229-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:20:35.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2516" for this suite. 
• [SLOW TEST:10.532 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":35,"skipped":668,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:20:36.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-7fa591c5-28b1-4b0f-9692-ec5aacbc08ee STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:20:42.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2007" for this suite. 
• [SLOW TEST:6.481 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":672,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:20:42.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:20:42.624: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b78ebe9c-27ac-43df-819d-c14be3a729e3" in namespace "projected-3080" to be "success or failure" Jun 7 21:20:42.659: INFO: Pod "downwardapi-volume-b78ebe9c-27ac-43df-819d-c14be3a729e3": Phase="Pending", Reason="", readiness=false. Elapsed: 35.768469ms Jun 7 21:20:44.671: INFO: Pod "downwardapi-volume-b78ebe9c-27ac-43df-819d-c14be3a729e3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047573058s Jun 7 21:20:46.674: INFO: Pod "downwardapi-volume-b78ebe9c-27ac-43df-819d-c14be3a729e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050344277s STEP: Saw pod success Jun 7 21:20:46.674: INFO: Pod "downwardapi-volume-b78ebe9c-27ac-43df-819d-c14be3a729e3" satisfied condition "success or failure" Jun 7 21:20:46.676: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b78ebe9c-27ac-43df-819d-c14be3a729e3 container client-container: STEP: delete the pod Jun 7 21:20:46.854: INFO: Waiting for pod downwardapi-volume-b78ebe9c-27ac-43df-819d-c14be3a729e3 to disappear Jun 7 21:20:46.989: INFO: Pod downwardapi-volume-b78ebe9c-27ac-43df-819d-c14be3a729e3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:20:46.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3080" for this suite. 
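The framework lines above show a classic poll-with-timeout pattern: check the pod phase roughly every two seconds until it reaches Succeeded (or Failed) or the 5m0s budget is exhausted, logging the elapsed time on each poll. A minimal, cluster-free sketch of that pattern — the function name and parameters here are illustrative, not the framework's actual API:

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns a truthy
    value or `timeout` seconds elapse; raise TimeoutError otherwise.
    Mirrors the spirit of the e2e framework's "Waiting up to 5m0s for
    pod ... to be 'success or failure'" loop; names are illustrative."""
    deadline = clock() + timeout
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        sleep(interval)

# Example: a fake pod that reaches Succeeded on the third poll.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_condition(
    lambda: next(phases) == "Succeeded" and "Succeeded",
    sleep=lambda _: None))  # Succeeded
```

Injecting `clock` and `sleep` keeps the loop testable without real delays, which is also why the log can report precise `Elapsed:` values per poll.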
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":681,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:20:47.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Jun 7 21:20:47.335: INFO: Waiting up to 5m0s for pod "client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9" in namespace "containers-1229" to be "success or failure" Jun 7 21:20:47.371: INFO: Pod "client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.153479ms Jun 7 21:20:49.394: INFO: Pod "client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05933937s Jun 7 21:20:51.486: INFO: Pod "client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150895093s Jun 7 21:20:53.542: INFO: Pod "client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.207323538s STEP: Saw pod success Jun 7 21:20:53.542: INFO: Pod "client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9" satisfied condition "success or failure" Jun 7 21:20:53.595: INFO: Trying to get logs from node jerma-worker pod client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9 container test-container: STEP: delete the pod Jun 7 21:20:53.667: INFO: Waiting for pod client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9 to disappear Jun 7 21:20:53.677: INFO: Pod client-containers-ee6fe7f2-fb06-4c98-b23c-d1d1843b6ca9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:20:53.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1229" for this suite. • [SLOW TEST:6.579 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:20:53.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:20:54.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:20:56.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161654, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161654, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161654, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161654, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 21:20:58.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161654, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161654, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161654, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161654, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:21:01.960: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:21:01.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9605-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:21:03.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6378" for this suite. STEP: Destroying namespace "webhook-6378-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.534 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":39,"skipped":708,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:21:03.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jun 7 21:21:08.177: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 7 21:21:23.282: INFO: no pod exists with 
the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:21:23.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1587" for this suite. • [SLOW TEST:20.021 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":40,"skipped":715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:21:23.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating 
the pod Jun 7 21:21:23.374: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:21:30.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6127" for this suite. • [SLOW TEST:7.412 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":41,"skipped":745,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:21:30.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 7 21:21:30.801: INFO: Waiting up to 5m0s for pod "pod-c3b0b7e2-9603-4f0b-b5c4-0b8c667f973f" in namespace "emptydir-9895" to be "success or failure" Jun 7 21:21:30.819: INFO: Pod 
"pod-c3b0b7e2-9603-4f0b-b5c4-0b8c667f973f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.962719ms Jun 7 21:21:32.823: INFO: Pod "pod-c3b0b7e2-9603-4f0b-b5c4-0b8c667f973f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02191103s Jun 7 21:21:34.827: INFO: Pod "pod-c3b0b7e2-9603-4f0b-b5c4-0b8c667f973f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02620843s STEP: Saw pod success Jun 7 21:21:34.827: INFO: Pod "pod-c3b0b7e2-9603-4f0b-b5c4-0b8c667f973f" satisfied condition "success or failure" Jun 7 21:21:34.830: INFO: Trying to get logs from node jerma-worker2 pod pod-c3b0b7e2-9603-4f0b-b5c4-0b8c667f973f container test-container: STEP: delete the pod Jun 7 21:21:34.860: INFO: Waiting for pod pod-c3b0b7e2-9603-4f0b-b5c4-0b8c667f973f to disappear Jun 7 21:21:34.872: INFO: Pod pod-c3b0b7e2-9603-4f0b-b5c4-0b8c667f973f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:21:34.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9895" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":747,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:21:34.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-36d1ab3d-2260-4006-b38a-06324bed17c8 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-36d1ab3d-2260-4006-b38a-06324bed17c8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:21:43.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3662" for this suite. 
• [SLOW TEST:8.451 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":750,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:21:43.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:21:43.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9954" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":44,"skipped":758,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:21:43.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 7 21:21:43.494: INFO: Waiting up to 5m0s for pod "pod-01181d30-8423-46bf-8052-21e0b3a01a93" in namespace "emptydir-3942" to be "success or failure" Jun 7 21:21:43.520: INFO: Pod "pod-01181d30-8423-46bf-8052-21e0b3a01a93": Phase="Pending", Reason="", readiness=false. Elapsed: 26.065455ms Jun 7 21:21:45.525: INFO: Pod "pod-01181d30-8423-46bf-8052-21e0b3a01a93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031100578s Jun 7 21:21:47.529: INFO: Pod "pod-01181d30-8423-46bf-8052-21e0b3a01a93": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035030404s STEP: Saw pod success Jun 7 21:21:47.529: INFO: Pod "pod-01181d30-8423-46bf-8052-21e0b3a01a93" satisfied condition "success or failure" Jun 7 21:21:47.532: INFO: Trying to get logs from node jerma-worker2 pod pod-01181d30-8423-46bf-8052-21e0b3a01a93 container test-container: STEP: delete the pod Jun 7 21:21:47.584: INFO: Waiting for pod pod-01181d30-8423-46bf-8052-21e0b3a01a93 to disappear Jun 7 21:21:47.588: INFO: Pod pod-01181d30-8423-46bf-8052-21e0b3a01a93 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:21:47.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3942" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":772,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:21:47.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3279 [It] should have a 
working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-3279 Jun 7 21:21:47.678: INFO: Found 0 stateful pods, waiting for 1 Jun 7 21:21:57.683: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 7 21:21:57.711: INFO: Deleting all statefulset in ns statefulset-3279 Jun 7 21:21:57.717: INFO: Scaling statefulset ss to 0 Jun 7 21:22:17.807: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 21:22:17.810: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:22:17.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3279" for this suite. 
• [SLOW TEST:30.236 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":46,"skipped":787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:22:17.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:22:17.897: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:22:18.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "custom-resource-definition-3605" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":47,"skipped":810,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:22:18.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jun 7 21:22:23.559: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8106 pod-service-account-085ff271-0cda-4dea-ab27-b04030a1208a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 7 21:22:23.786: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8106 pod-service-account-085ff271-0cda-4dea-ab27-b04030a1208a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 7 21:22:24.001: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8106 pod-service-account-085ff271-0cda-4dea-ab27-b04030a1208a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 
21:22:24.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8106" for this suite. • [SLOW TEST:5.290 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":48,"skipped":811,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:22:24.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:22:24.352: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27e50829-b827-44bc-a52f-171d59bf2030" in namespace "downward-api-6986" to be "success or failure" Jun 7 21:22:24.354: INFO: Pod "downwardapi-volume-27e50829-b827-44bc-a52f-171d59bf2030": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.434798ms Jun 7 21:22:26.358: INFO: Pod "downwardapi-volume-27e50829-b827-44bc-a52f-171d59bf2030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006598488s Jun 7 21:22:28.362: INFO: Pod "downwardapi-volume-27e50829-b827-44bc-a52f-171d59bf2030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010821671s STEP: Saw pod success Jun 7 21:22:28.362: INFO: Pod "downwardapi-volume-27e50829-b827-44bc-a52f-171d59bf2030" satisfied condition "success or failure" Jun 7 21:22:28.365: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-27e50829-b827-44bc-a52f-171d59bf2030 container client-container: STEP: delete the pod Jun 7 21:22:28.434: INFO: Waiting for pod downwardapi-volume-27e50829-b827-44bc-a52f-171d59bf2030 to disappear Jun 7 21:22:28.469: INFO: Pod downwardapi-volume-27e50829-b827-44bc-a52f-171d59bf2030 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:22:28.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6986" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":821,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:22:28.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 
STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:02.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9614" for this suite. • [SLOW TEST:33.604 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":838,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:02.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-b17b9b5e-ded4-49c1-a30a-d701ebfa091a STEP: Creating a pod to test consume configMaps Jun 7 21:23:02.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-4035d66e-4647-4798-aaa3-f10ee4da8dcd" in namespace "configmap-520" to be "success or failure" Jun 7 21:23:02.198: INFO: Pod "pod-configmaps-4035d66e-4647-4798-aaa3-f10ee4da8dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.23738ms Jun 7 21:23:04.202: INFO: Pod "pod-configmaps-4035d66e-4647-4798-aaa3-f10ee4da8dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007778868s Jun 7 21:23:06.207: INFO: Pod "pod-configmaps-4035d66e-4647-4798-aaa3-f10ee4da8dcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012120754s STEP: Saw pod success Jun 7 21:23:06.207: INFO: Pod "pod-configmaps-4035d66e-4647-4798-aaa3-f10ee4da8dcd" satisfied condition "success or failure" Jun 7 21:23:06.210: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4035d66e-4647-4798-aaa3-f10ee4da8dcd container configmap-volume-test: STEP: delete the pod Jun 7 21:23:06.325: INFO: Waiting for pod pod-configmaps-4035d66e-4647-4798-aaa3-f10ee4da8dcd to disappear Jun 7 21:23:06.342: INFO: Pod pod-configmaps-4035d66e-4647-4798-aaa3-f10ee4da8dcd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:06.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-520" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":872,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:06.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 7 21:23:06.524: INFO: Waiting up to 5m0s for pod "downward-api-2f30d330-007d-4c26-9a9d-7b8805ef704d" in namespace "downward-api-7548" to be "success or failure" Jun 7 21:23:06.608: INFO: Pod "downward-api-2f30d330-007d-4c26-9a9d-7b8805ef704d": Phase="Pending", Reason="", readiness=false. Elapsed: 83.634061ms Jun 7 21:23:08.612: INFO: Pod "downward-api-2f30d330-007d-4c26-9a9d-7b8805ef704d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088167918s Jun 7 21:23:10.618: INFO: Pod "downward-api-2f30d330-007d-4c26-9a9d-7b8805ef704d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.093513927s STEP: Saw pod success Jun 7 21:23:10.618: INFO: Pod "downward-api-2f30d330-007d-4c26-9a9d-7b8805ef704d" satisfied condition "success or failure" Jun 7 21:23:10.621: INFO: Trying to get logs from node jerma-worker2 pod downward-api-2f30d330-007d-4c26-9a9d-7b8805ef704d container dapi-container: STEP: delete the pod Jun 7 21:23:10.647: INFO: Waiting for pod downward-api-2f30d330-007d-4c26-9a9d-7b8805ef704d to disappear Jun 7 21:23:10.698: INFO: Pod downward-api-2f30d330-007d-4c26-9a9d-7b8805ef704d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:10.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7548" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":876,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:10.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:10.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5690" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":53,"skipped":883,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:10.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 7 21:23:17.453: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2f18eaaf-006f-432c-ac3a-655f2738584b" Jun 7 21:23:17.453: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2f18eaaf-006f-432c-ac3a-655f2738584b" in namespace "pods-2534" to be "terminated due to deadline exceeded" Jun 7 21:23:17.466: INFO: Pod 
"pod-update-activedeadlineseconds-2f18eaaf-006f-432c-ac3a-655f2738584b": Phase="Running", Reason="", readiness=true. Elapsed: 13.332926ms Jun 7 21:23:19.470: INFO: Pod "pod-update-activedeadlineseconds-2f18eaaf-006f-432c-ac3a-655f2738584b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.01709451s Jun 7 21:23:19.470: INFO: Pod "pod-update-activedeadlineseconds-2f18eaaf-006f-432c-ac3a-655f2738584b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:19.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2534" for this suite. • [SLOW TEST:8.680 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":900,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:19.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services 
for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jun 7 21:23:19.524: INFO: namespace kubectl-8403 Jun 7 21:23:19.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8403' Jun 7 21:23:19.833: INFO: stderr: "" Jun 7 21:23:19.833: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 7 21:23:20.838: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:23:20.838: INFO: Found 0 / 1 Jun 7 21:23:21.837: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:23:21.837: INFO: Found 0 / 1 Jun 7 21:23:22.837: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:23:22.837: INFO: Found 0 / 1 Jun 7 21:23:23.848: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:23:23.848: INFO: Found 1 / 1 Jun 7 21:23:23.848: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 7 21:23:23.851: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:23:23.851: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 7 21:23:23.851: INFO: wait on agnhost-master startup in kubectl-8403 Jun 7 21:23:23.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-w6wg7 agnhost-master --namespace=kubectl-8403' Jun 7 21:23:23.974: INFO: stderr: "" Jun 7 21:23:23.974: INFO: stdout: "Paused\n" STEP: exposing RC Jun 7 21:23:23.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8403' Jun 7 21:23:24.140: INFO: stderr: "" Jun 7 21:23:24.140: INFO: stdout: "service/rm2 exposed\n" Jun 7 21:23:24.149: INFO: Service rm2 in namespace kubectl-8403 found. 
STEP: exposing service Jun 7 21:23:26.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8403' Jun 7 21:23:26.317: INFO: stderr: "" Jun 7 21:23:26.317: INFO: stdout: "service/rm3 exposed\n" Jun 7 21:23:26.323: INFO: Service rm3 in namespace kubectl-8403 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:28.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8403" for this suite. • [SLOW TEST:8.862 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":55,"skipped":909,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:28.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message 
[LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 7 21:23:32.446: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:32.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2415" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":913,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:32.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role 
binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:23:33.480: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:23:35.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 21:23:37.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63727161813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:23:40.523: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:40.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7588" for this suite. STEP: Destroying namespace "webhook-7588-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.239 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":57,"skipped":946,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:40.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:23:40.866: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abd7ba6d-de4e-4636-9096-042b2c8d5477" in namespace "downward-api-7421" to be "success or failure" Jun 7 21:23:40.875: INFO: Pod "downwardapi-volume-abd7ba6d-de4e-4636-9096-042b2c8d5477": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.423107ms Jun 7 21:23:42.899: INFO: Pod "downwardapi-volume-abd7ba6d-de4e-4636-9096-042b2c8d5477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032769802s Jun 7 21:23:44.903: INFO: Pod "downwardapi-volume-abd7ba6d-de4e-4636-9096-042b2c8d5477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036365152s STEP: Saw pod success Jun 7 21:23:44.903: INFO: Pod "downwardapi-volume-abd7ba6d-de4e-4636-9096-042b2c8d5477" satisfied condition "success or failure" Jun 7 21:23:44.905: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-abd7ba6d-de4e-4636-9096-042b2c8d5477 container client-container: STEP: delete the pod Jun 7 21:23:44.945: INFO: Waiting for pod downwardapi-volume-abd7ba6d-de4e-4636-9096-042b2c8d5477 to disappear Jun 7 21:23:44.947: INFO: Pod downwardapi-volume-abd7ba6d-de4e-4636-9096-042b2c8d5477 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:44.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7421" for this suite. 
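The "should provide podname only" check above reads the pod's own name back through a downwardAPI volume. A minimal sketch of that wiring (illustrative names; the suite's pod and namespace names are randomized):

```yaml
# Sketch: a downwardAPI volume projecting the pod's own name into a file,
# which the test container reads back and the suite verifies from its logs.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
    - name: client-container
      image: busybox:1.29
      command: ["cat", "/etc/podinfo/podname"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  restartPolicy: Never
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```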
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:44.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jun 7 21:23:45.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4159 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 7 21:23:48.266: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0607 21:23:48.194093 938 log.go:172] (0xc000af29a0) (0xc0009ce1e0) Create stream\nI0607 21:23:48.194151 938 log.go:172] (0xc000af29a0) (0xc0009ce1e0) Stream added, broadcasting: 1\nI0607 21:23:48.196677 938 log.go:172] (0xc000af29a0) Reply frame received for 1\nI0607 21:23:48.196736 938 log.go:172] (0xc000af29a0) (0xc000697a40) Create stream\nI0607 21:23:48.196753 938 log.go:172] (0xc000af29a0) (0xc000697a40) Stream added, broadcasting: 3\nI0607 21:23:48.197970 938 log.go:172] (0xc000af29a0) Reply frame received for 3\nI0607 21:23:48.198017 938 log.go:172] (0xc000af29a0) (0xc000697ae0) Create stream\nI0607 21:23:48.198030 938 log.go:172] (0xc000af29a0) (0xc000697ae0) Stream added, broadcasting: 5\nI0607 21:23:48.199093 938 log.go:172] (0xc000af29a0) Reply frame received for 5\nI0607 21:23:48.199132 938 log.go:172] (0xc000af29a0) (0xc0009ce280) Create stream\nI0607 21:23:48.199147 938 log.go:172] (0xc000af29a0) (0xc0009ce280) Stream added, broadcasting: 7\nI0607 21:23:48.200202 938 log.go:172] (0xc000af29a0) Reply frame received for 7\nI0607 21:23:48.200399 938 log.go:172] (0xc000697a40) (3) Writing data frame\nI0607 21:23:48.200637 938 log.go:172] (0xc000697a40) (3) Writing data frame\nI0607 21:23:48.201569 938 log.go:172] (0xc000af29a0) Data frame received for 5\nI0607 21:23:48.201597 938 log.go:172] (0xc000697ae0) (5) Data frame handling\nI0607 21:23:48.201609 938 log.go:172] (0xc000697ae0) (5) Data frame sent\nI0607 21:23:48.202965 938 log.go:172] (0xc000af29a0) Data frame received for 5\nI0607 21:23:48.202983 938 log.go:172] (0xc000697ae0) (5) Data frame handling\nI0607 21:23:48.202998 938 log.go:172] (0xc000697ae0) (5) Data frame sent\nI0607 21:23:48.239915 938 log.go:172] (0xc000af29a0) Data frame received for 5\nI0607 21:23:48.239959 938 log.go:172] (0xc000697ae0) (5) Data frame handling\nI0607 21:23:48.240180 938 log.go:172] 
(0xc000af29a0) Data frame received for 7\nI0607 21:23:48.240225 938 log.go:172] (0xc0009ce280) (7) Data frame handling\nI0607 21:23:48.240658 938 log.go:172] (0xc000af29a0) Data frame received for 1\nI0607 21:23:48.240695 938 log.go:172] (0xc000af29a0) (0xc000697a40) Stream removed, broadcasting: 3\nI0607 21:23:48.240730 938 log.go:172] (0xc0009ce1e0) (1) Data frame handling\nI0607 21:23:48.240879 938 log.go:172] (0xc0009ce1e0) (1) Data frame sent\nI0607 21:23:48.240904 938 log.go:172] (0xc000af29a0) (0xc0009ce1e0) Stream removed, broadcasting: 1\nI0607 21:23:48.240930 938 log.go:172] (0xc000af29a0) Go away received\nI0607 21:23:48.241736 938 log.go:172] (0xc000af29a0) (0xc0009ce1e0) Stream removed, broadcasting: 1\nI0607 21:23:48.241781 938 log.go:172] (0xc000af29a0) (0xc000697a40) Stream removed, broadcasting: 3\nI0607 21:23:48.241801 938 log.go:172] (0xc000af29a0) (0xc000697ae0) Stream removed, broadcasting: 5\nI0607 21:23:48.241817 938 log.go:172] (0xc000af29a0) (0xc0009ce280) Stream removed, broadcasting: 7\n" Jun 7 21:23:48.266: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:50.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4159" for this suite. 
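As the stderr above notes, `kubectl run --generator=job/v1` is deprecated. The Job it created for this test corresponds roughly to the following manifest (a sketch reconstructed from the flags in the log; `kubectl create job` or applying a manifest like this is the non-deprecated path):

```yaml
# Sketch: the Job behind `kubectl run e2e-test-rm-busybox-job
# --image=busybox:1.29 --rm --generator=job/v1 --restart=OnFailure --stdin`.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      containers:
        - name: e2e-test-rm-busybox-job
          image: docker.io/library/busybox:1.29
          command: ["sh", "-c", "cat && echo 'stdin closed'"]
          stdin: true              # lets `--attach --stdin` feed "abcd1234"
      restartPolicy: OnFailure
```

With `--rm`, kubectl deletes the Job after the attached session ends, which is the `job.batch "e2e-test-rm-busybox-job" deleted` line the test then verifies.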
• [SLOW TEST:5.326 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":59,"skipped":1005,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:50.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 7 21:23:50.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3364' Jun 7 21:23:50.467: INFO: stderr: "kubectl run 
--generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 7 21:23:50.467: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Jun 7 21:23:50.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3364' Jun 7 21:23:50.610: INFO: stderr: "" Jun 7 21:23:50.610: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:23:50.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3364" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":60,"skipped":1013,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:23:50.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:23:51.523: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:23:53.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161831, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161831, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161831, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161831, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 21:23:55.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161831, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161831, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161831, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161831, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:23:58.974: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jun 7 21:24:03.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-7774 to-be-attached-pod -i -c=container1' Jun 7 21:24:03.144: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:03.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7774" for this suite. STEP: Destroying namespace "webhook-7774-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.663 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":61,"skipped":1024,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:03.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name 
projected-secret-test-19784a69-0e80-4225-8d0b-ea5d734c182e STEP: Creating a pod to test consume secrets Jun 7 21:24:03.375: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52" in namespace "projected-6289" to be "success or failure" Jun 7 21:24:03.412: INFO: Pod "pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52": Phase="Pending", Reason="", readiness=false. Elapsed: 36.802388ms Jun 7 21:24:05.416: INFO: Pod "pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040773431s Jun 7 21:24:07.420: INFO: Pod "pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52": Phase="Running", Reason="", readiness=true. Elapsed: 4.044657067s Jun 7 21:24:09.423: INFO: Pod "pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048284052s STEP: Saw pod success Jun 7 21:24:09.423: INFO: Pod "pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52" satisfied condition "success or failure" Jun 7 21:24:09.427: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52 container projected-secret-volume-test: STEP: delete the pod Jun 7 21:24:09.450: INFO: Waiting for pod pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52 to disappear Jun 7 21:24:09.517: INFO: Pod pod-projected-secrets-921ec73c-9971-4e89-9dc9-105edd47fa52 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:09.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6289" for this suite. 
• [SLOW TEST:6.323 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1024,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:09.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 7 21:24:09.811: INFO: Waiting up to 5m0s for pod "downward-api-4472348d-79e7-4d52-a179-f63da13517c7" in namespace "downward-api-8404" to be "success or failure" Jun 7 21:24:09.828: INFO: Pod "downward-api-4472348d-79e7-4d52-a179-f63da13517c7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.425398ms Jun 7 21:24:11.833: INFO: Pod "downward-api-4472348d-79e7-4d52-a179-f63da13517c7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022583388s Jun 7 21:24:13.838: INFO: Pod "downward-api-4472348d-79e7-4d52-a179-f63da13517c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027193254s STEP: Saw pod success Jun 7 21:24:13.838: INFO: Pod "downward-api-4472348d-79e7-4d52-a179-f63da13517c7" satisfied condition "success or failure" Jun 7 21:24:13.841: INFO: Trying to get logs from node jerma-worker2 pod downward-api-4472348d-79e7-4d52-a179-f63da13517c7 container dapi-container: STEP: delete the pod Jun 7 21:24:13.910: INFO: Waiting for pod downward-api-4472348d-79e7-4d52-a179-f63da13517c7 to disappear Jun 7 21:24:13.919: INFO: Pod downward-api-4472348d-79e7-4d52-a179-f63da13517c7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8404" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1028,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:13.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server 
cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:24:14.674: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:24:16.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161854, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161854, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161854, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161854, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:24:19.783: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a 
configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:20.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6928" for this suite. STEP: Destroying namespace "webhook-6928-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.254 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":64,"skipped":1039,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:20.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 7 21:24:20.651: INFO: Waiting up to 5m0s for pod "pod-7686457e-0c13-405b-a74e-1f262d4468ff" in namespace "emptydir-7728" to be "success or failure" Jun 7 21:24:20.687: INFO: Pod "pod-7686457e-0c13-405b-a74e-1f262d4468ff": Phase="Pending", Reason="", readiness=false. Elapsed: 35.829186ms Jun 7 21:24:22.691: INFO: Pod "pod-7686457e-0c13-405b-a74e-1f262d4468ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040159496s Jun 7 21:24:24.695: INFO: Pod "pod-7686457e-0c13-405b-a74e-1f262d4468ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044168094s STEP: Saw pod success Jun 7 21:24:24.695: INFO: Pod "pod-7686457e-0c13-405b-a74e-1f262d4468ff" satisfied condition "success or failure" Jun 7 21:24:24.698: INFO: Trying to get logs from node jerma-worker2 pod pod-7686457e-0c13-405b-a74e-1f262d4468ff container test-container: STEP: delete the pod Jun 7 21:24:24.717: INFO: Waiting for pod pod-7686457e-0c13-405b-a74e-1f262d4468ff to disappear Jun 7 21:24:24.740: INFO: Pod pod-7686457e-0c13-405b-a74e-1f262d4468ff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:24.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7728" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1074,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:24.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-af1f39cc-7b7c-43ef-b3af-a5086a6c8efa STEP: Creating a pod to test consume configMaps Jun 7 21:24:24.836: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54c14de7-355c-4a66-9991-bb4945cae26b" in namespace "projected-914" to be "success or failure" Jun 7 21:24:24.840: INFO: Pod "pod-projected-configmaps-54c14de7-355c-4a66-9991-bb4945cae26b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359613ms Jun 7 21:24:26.845: INFO: Pod "pod-projected-configmaps-54c14de7-355c-4a66-9991-bb4945cae26b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008903482s Jun 7 21:24:28.849: INFO: Pod "pod-projected-configmaps-54c14de7-355c-4a66-9991-bb4945cae26b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01294088s STEP: Saw pod success Jun 7 21:24:28.849: INFO: Pod "pod-projected-configmaps-54c14de7-355c-4a66-9991-bb4945cae26b" satisfied condition "success or failure" Jun 7 21:24:28.852: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-54c14de7-355c-4a66-9991-bb4945cae26b container projected-configmap-volume-test: STEP: delete the pod Jun 7 21:24:28.891: INFO: Waiting for pod pod-projected-configmaps-54c14de7-355c-4a66-9991-bb4945cae26b to disappear Jun 7 21:24:28.901: INFO: Pod pod-projected-configmaps-54c14de7-355c-4a66-9991-bb4945cae26b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:28.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-914" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1091,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:28.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let 
webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:24:29.975: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:24:31.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161869, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161869, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161870, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161869, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:24:35.092: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:35.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4462" for this suite. 
STEP: Destroying namespace "webhook-4462-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.544 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":67,"skipped":1093,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:35.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-d8a79bdf-e1b1-4b9d-9fc6-fe6b0a0577e7 STEP: Creating a pod to test consume secrets Jun 7 21:24:35.515: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3da9ced0-b86c-459b-865b-97a76e6b7b28" in namespace 
"projected-2464" to be "success or failure" Jun 7 21:24:35.562: INFO: Pod "pod-projected-secrets-3da9ced0-b86c-459b-865b-97a76e6b7b28": Phase="Pending", Reason="", readiness=false. Elapsed: 46.234961ms Jun 7 21:24:37.603: INFO: Pod "pod-projected-secrets-3da9ced0-b86c-459b-865b-97a76e6b7b28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088067044s Jun 7 21:24:39.607: INFO: Pod "pod-projected-secrets-3da9ced0-b86c-459b-865b-97a76e6b7b28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091508588s STEP: Saw pod success Jun 7 21:24:39.607: INFO: Pod "pod-projected-secrets-3da9ced0-b86c-459b-865b-97a76e6b7b28" satisfied condition "success or failure" Jun 7 21:24:39.611: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3da9ced0-b86c-459b-865b-97a76e6b7b28 container projected-secret-volume-test: STEP: delete the pod Jun 7 21:24:39.674: INFO: Waiting for pod pod-projected-secrets-3da9ced0-b86c-459b-865b-97a76e6b7b28 to disappear Jun 7 21:24:39.686: INFO: Pod pod-projected-secrets-3da9ced0-b86c-459b-865b-97a76e6b7b28 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:39.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2464" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1105,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:39.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 7 21:24:47.795: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 21:24:47.800: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 21:24:49.800: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 21:24:49.916: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 21:24:51.800: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 21:24:51.831: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 21:24:53.800: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 21:24:53.819: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 21:24:55.800: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 21:24:55.805: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 21:24:57.800: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 21:24:57.805: INFO: Pod pod-with-poststart-http-hook still exists Jun 7 21:24:59.800: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 7 21:24:59.804: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:24:59.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8818" for this suite. 
• [SLOW TEST:20.101 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1114,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:24:59.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 7 21:24:59.928: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 7 21:24:59.962: INFO: Waiting for terminating namespaces to be deleted... 
Jun 7 21:24:59.965: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jun 7 21:24:59.971: INFO: pod-handle-http-request from container-lifecycle-hook-8818 started at 2020-06-07 21:24:39 +0000 UTC (1 container statuses recorded) Jun 7 21:24:59.971: INFO: Container pod-handle-http-request ready: true, restart count 0 Jun 7 21:24:59.971: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 21:24:59.971: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 21:24:59.971: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 21:24:59.971: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 21:24:59.971: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jun 7 21:24:59.976: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 7 21:24:59.976: INFO: Container kube-hunter ready: false, restart count 0 Jun 7 21:24:59.976: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 7 21:24:59.976: INFO: Container kube-bench ready: false, restart count 0 Jun 7 21:24:59.976: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 21:24:59.976: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 21:24:59.976: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 21:24:59.976: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Jun 7 21:25:00.059: INFO: 
Pod pod-handle-http-request requesting resource cpu=0m on Node jerma-worker Jun 7 21:25:00.059: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Jun 7 21:25:00.059: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Jun 7 21:25:00.059: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Jun 7 21:25:00.059: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jun 7 21:25:00.059: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Jun 7 21:25:00.066: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-3d3e6667-1a51-4700-9f83-c45f01196e36.16166006aaa949b6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9205/filler-pod-3d3e6667-1a51-4700-9f83-c45f01196e36 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-3d3e6667-1a51-4700-9f83-c45f01196e36.16166006f8a46529], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3d3e6667-1a51-4700-9f83-c45f01196e36.16166007660ba8e4], Reason = [Created], Message = [Created container filler-pod-3d3e6667-1a51-4700-9f83-c45f01196e36] STEP: Considering event: Type = [Normal], Name = [filler-pod-3d3e6667-1a51-4700-9f83-c45f01196e36.161660077c3fbf2d], Reason = [Started], Message = [Started container filler-pod-3d3e6667-1a51-4700-9f83-c45f01196e36] STEP: Considering event: Type = [Normal], Name = [filler-pod-d4683189-c2ad-4c1e-9660-685f5b0c48a1.16166006ac47c6f6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9205/filler-pod-d4683189-c2ad-4c1e-9660-685f5b0c48a1 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-d4683189-c2ad-4c1e-9660-685f5b0c48a1.16166007342a61d9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d4683189-c2ad-4c1e-9660-685f5b0c48a1.16166007808d2ec7], Reason = [Created], Message = [Created container filler-pod-d4683189-c2ad-4c1e-9660-685f5b0c48a1] STEP: Considering event: Type = [Normal], Name = [filler-pod-d4683189-c2ad-4c1e-9660-685f5b0c48a1.161660079215dd2d], Reason = [Started], Message = [Started container filler-pod-d4683189-c2ad-4c1e-9660-685f5b0c48a1] STEP: Considering event: Type = [Warning], Name = [additional-pod.1616600812fe3576], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:07.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9205" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.455 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":70,"skipped":1120,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:07.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:25:07.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-660855f7-6136-4ad7-b2a1-51d5ecb482b7" in namespace "projected-3490" to be "success or failure" Jun 7 21:25:07.376: INFO: Pod "downwardapi-volume-660855f7-6136-4ad7-b2a1-51d5ecb482b7": 
Phase="Pending", Reason="", readiness=false. Elapsed: 14.282526ms Jun 7 21:25:09.381: INFO: Pod "downwardapi-volume-660855f7-6136-4ad7-b2a1-51d5ecb482b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019457554s Jun 7 21:25:11.386: INFO: Pod "downwardapi-volume-660855f7-6136-4ad7-b2a1-51d5ecb482b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024014464s STEP: Saw pod success Jun 7 21:25:11.386: INFO: Pod "downwardapi-volume-660855f7-6136-4ad7-b2a1-51d5ecb482b7" satisfied condition "success or failure" Jun 7 21:25:11.389: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-660855f7-6136-4ad7-b2a1-51d5ecb482b7 container client-container: STEP: delete the pod Jun 7 21:25:11.420: INFO: Waiting for pod downwardapi-volume-660855f7-6136-4ad7-b2a1-51d5ecb482b7 to disappear Jun 7 21:25:11.430: INFO: Pod downwardapi-volume-660855f7-6136-4ad7-b2a1-51d5ecb482b7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:11.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3490" for this suite. 
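The test above mounts the container's own CPU request into a file through a projected downwardAPI volume and checks the file's contents. A sketch of the pod shape being exercised, assuming a made-up pod name and a 250m request; the `resourceFieldRef` fields follow the core/v1 API:

```yaml
# Sketch of a projected downwardAPI volume exposing requests.cpu as a
# file. Pod name, image, and request value are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```

With a 250m request, the mounted file would contain the request rounded up to an integer core count in the downward API's default divisor, which is what the test asserts against the container log.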
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1136,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:11.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 7 21:25:11.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8862' Jun 7 21:25:11.633: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 7 21:25:11.633: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jun 7 21:25:11.652: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-znmsz] Jun 7 21:25:11.652: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-znmsz" in namespace "kubectl-8862" to be "running and ready" Jun 7 21:25:11.683: INFO: Pod "e2e-test-httpd-rc-znmsz": Phase="Pending", Reason="", readiness=false. Elapsed: 30.919112ms Jun 7 21:25:13.687: INFO: Pod "e2e-test-httpd-rc-znmsz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034546833s Jun 7 21:25:15.692: INFO: Pod "e2e-test-httpd-rc-znmsz": Phase="Running", Reason="", readiness=true. Elapsed: 4.039387795s Jun 7 21:25:15.692: INFO: Pod "e2e-test-httpd-rc-znmsz" satisfied condition "running and ready" Jun 7 21:25:15.692: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-znmsz] Jun 7 21:25:15.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8862' Jun 7 21:25:15.817: INFO: stderr: "" Jun 7 21:25:15.818: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.126. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.126. 
Set the 'ServerName' directive globally to suppress this message\n[Sun Jun 07 21:25:14.539352 2020] [mpm_event:notice] [pid 1:tid 139759156419432] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Jun 07 21:25:14.539412 2020] [core:notice] [pid 1:tid 139759156419432] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Jun 7 21:25:15.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8862' Jun 7 21:25:15.920: INFO: stderr: "" Jun 7 21:25:15.920: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:15.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8862" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":72,"skipped":1138,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:15.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:33.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8396" for this suite. • [SLOW TEST:17.182 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":73,"skipped":1153,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:33.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 7 21:25:33.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6571 /api/v1/namespaces/watch-6571/configmaps/e2e-watch-test-resource-version 1a9fcff9-5a53-46b1-bfe9-b619f9880ee4 22527010 0 2020-06-07 21:25:33 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 7 21:25:33.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6571 /api/v1/namespaces/watch-6571/configmaps/e2e-watch-test-resource-version 1a9fcff9-5a53-46b1-bfe9-b619f9880ee4 22527011 0 2020-06-07 21:25:33 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] 
Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:33.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6571" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":74,"skipped":1154,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:33.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:25:33.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:25:35.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161933, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63727161933, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161933, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727161933, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:25:38.857: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:39.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3712" for this suite. STEP: Destroying namespace "webhook-3712-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.852 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":75,"skipped":1175,"failed":0} [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:39.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-1dd4d16a-d17e-4d00-9759-f1a1849bcd70 STEP: Creating secret with name s-test-opt-upd-b7f9b238-27c5-4cf0-89b8-f4920c950a0b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1dd4d16a-d17e-4d00-9759-f1a1849bcd70 STEP: Updating secret s-test-opt-upd-b7f9b238-27c5-4cf0-89b8-f4920c950a0b STEP: Creating secret with name s-test-opt-create-b134517d-563c-4ccc-84f0-b0304a9211e3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:47.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4454" for this suite. • [SLOW TEST:8.221 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:47.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:25:47.408: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 5.379994ms) Jun 7 21:25:47.411: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.134493ms) Jun 7 21:25:47.415: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.501085ms) Jun 7 21:25:47.417: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.641718ms) Jun 7 21:25:47.420: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.547435ms) Jun 7 21:25:47.423: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.092534ms) Jun 7 21:25:47.450: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 26.610534ms) Jun 7 21:25:47.454: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.126702ms) Jun 7 21:25:47.458: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.697799ms) Jun 7 21:25:47.461: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.165191ms) Jun 7 21:25:47.464: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.922105ms) Jun 7 21:25:47.469: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 5.176985ms) Jun 7 21:25:47.473: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.825638ms) Jun 7 21:25:47.475: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.429324ms) Jun 7 21:25:47.478: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.808055ms) Jun 7 21:25:47.480: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.222412ms) Jun 7 21:25:47.483: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.408232ms) Jun 7 21:25:47.486: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.676492ms) Jun 7 21:25:47.488: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.45236ms) Jun 7 21:25:47.490: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 2.38374ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:47.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2478" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":77,"skipped":1193,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:47.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jun 7 21:25:47.547: INFO: Waiting up to 5m0s for pod "var-expansion-3f8433bc-7863-48ce-8a62-374d0564ff03" in namespace "var-expansion-3594" to be "success or failure" Jun 7 21:25:47.568: INFO: Pod "var-expansion-3f8433bc-7863-48ce-8a62-374d0564ff03": Phase="Pending", Reason="", readiness=false. Elapsed: 20.64273ms Jun 7 21:25:49.572: INFO: Pod "var-expansion-3f8433bc-7863-48ce-8a62-374d0564ff03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024883448s Jun 7 21:25:51.576: INFO: Pod "var-expansion-3f8433bc-7863-48ce-8a62-374d0564ff03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028655717s STEP: Saw pod success Jun 7 21:25:51.576: INFO: Pod "var-expansion-3f8433bc-7863-48ce-8a62-374d0564ff03" satisfied condition "success or failure" Jun 7 21:25:51.578: INFO: Trying to get logs from node jerma-worker pod var-expansion-3f8433bc-7863-48ce-8a62-374d0564ff03 container dapi-container: STEP: delete the pod Jun 7 21:25:51.666: INFO: Waiting for pod var-expansion-3f8433bc-7863-48ce-8a62-374d0564ff03 to disappear Jun 7 21:25:51.695: INFO: Pod var-expansion-3f8433bc-7863-48ce-8a62-374d0564ff03 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:25:51.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3594" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1208,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:25:51.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jun 7 21:25:51.799: INFO: >>> kubeConfig: /root/.kube/config Jun 7 
21:25:54.715: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:26:05.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7136" for this suite. • [SLOW TEST:13.495 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":79,"skipped":1210,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:26:05.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc 
simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0607 21:26:16.945827 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 7 21:26:16.945: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:26:16.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5291" for this suite. 
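The test above gives half of rc1's pods a second owner (rc2) before deleting rc1 with dependents pending. A sketch of what the dual `ownerReferences` stanza on such a pod looks like; the pod suffix and UIDs are placeholders, only the RC names come from the steps above:

```yaml
# Sketch of a pod owned by both ReplicationControllers. Because
# simpletest-rc-to-stay remains a valid owner, the garbage collector
# must not delete this pod when simpletest-rc-to-be-deleted is removed.
metadata:
  name: simpletest-rc-to-be-deleted-xxxxx   # placeholder pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000001   # placeholder UID
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 00000000-0000-0000-0000-000000000002   # placeholder UID
```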
• [SLOW TEST:11.786 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":80,"skipped":1212,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:26:16.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:26:17.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-244" for this 
suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1214,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:26:17.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-9df7e658-63b9-4bb9-9699-70a047f10ca2 STEP: Creating a pod to test consume secrets Jun 7 21:26:17.724: INFO: Waiting up to 5m0s for pod "pod-secrets-58fff2e7-5dcf-4500-9fd9-43761f1c51ac" in namespace "secrets-7900" to be "success or failure" Jun 7 21:26:17.863: INFO: Pod "pod-secrets-58fff2e7-5dcf-4500-9fd9-43761f1c51ac": Phase="Pending", Reason="", readiness=false. Elapsed: 139.083156ms Jun 7 21:26:19.867: INFO: Pod "pod-secrets-58fff2e7-5dcf-4500-9fd9-43761f1c51ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14357299s Jun 7 21:26:21.872: INFO: Pod "pod-secrets-58fff2e7-5dcf-4500-9fd9-43761f1c51ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.148557944s STEP: Saw pod success Jun 7 21:26:21.872: INFO: Pod "pod-secrets-58fff2e7-5dcf-4500-9fd9-43761f1c51ac" satisfied condition "success or failure" Jun 7 21:26:21.876: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-58fff2e7-5dcf-4500-9fd9-43761f1c51ac container secret-volume-test: STEP: delete the pod Jun 7 21:26:21.954: INFO: Waiting for pod pod-secrets-58fff2e7-5dcf-4500-9fd9-43761f1c51ac to disappear Jun 7 21:26:22.044: INFO: Pod pod-secrets-58fff2e7-5dcf-4500-9fd9-43761f1c51ac no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:26:22.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7900" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1214,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:26:22.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:26:30.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9515" for this suite. • [SLOW TEST:8.312 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:26:30.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:26:30.522: INFO: Waiting up to 5m0s for pod "busybox-user-65534-2d007212-c4e2-44a8-ae7b-06781bd45e2d" in namespace "security-context-test-7395" to be "success or failure" Jun 7 21:26:30.563: INFO: Pod "busybox-user-65534-2d007212-c4e2-44a8-ae7b-06781bd45e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 41.112124ms Jun 7 21:26:32.568: INFO: Pod "busybox-user-65534-2d007212-c4e2-44a8-ae7b-06781bd45e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04562375s Jun 7 21:26:34.572: INFO: Pod "busybox-user-65534-2d007212-c4e2-44a8-ae7b-06781bd45e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049992383s Jun 7 21:26:36.577: INFO: Pod "busybox-user-65534-2d007212-c4e2-44a8-ae7b-06781bd45e2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054284382s Jun 7 21:26:36.577: INFO: Pod "busybox-user-65534-2d007212-c4e2-44a8-ae7b-06781bd45e2d" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:26:36.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7395" for this suite. 
• [SLOW TEST:6.194 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1274,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:26:36.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 7 21:26:36.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine 
--generator=run/v1 --namespace=kubectl-7206' Jun 7 21:26:36.839: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 7 21:26:36.840: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Jun 7 21:26:36.895: INFO: scanned /root for discovery docs: Jun 7 21:26:36.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7206' Jun 7 21:26:53.709: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 7 21:26:53.710: INFO: stdout: "Created e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d\nScaling up e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Jun 7 21:26:53.710: INFO: stdout: "Created e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d\nScaling up e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jun 7 21:26:53.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7206' Jun 7 21:26:53.822: INFO: stderr: "" Jun 7 21:26:53.822: INFO: stdout: "e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d-dpkc8 " Jun 7 21:26:53.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d-dpkc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7206' Jun 7 21:26:53.921: INFO: stderr: "" Jun 7 21:26:53.921: INFO: stdout: "true" Jun 7 21:26:53.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d-dpkc8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7206' Jun 7 21:26:54.024: INFO: stderr: "" Jun 7 21:26:54.024: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jun 7 21:26:54.024: INFO: e2e-test-httpd-rc-ffa7a6ae84e1ff35580cef879fbeb42d-dpkc8 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Jun 7 21:26:54.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7206' Jun 7 21:26:54.146: INFO: stderr: "" Jun 7 21:26:54.146: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:26:54.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7206" for this suite. 
• [SLOW TEST:17.600 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":85,"skipped":1296,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:26:54.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 7 21:26:54.280: INFO: Waiting up to 5m0s for pod "pod-98ed9d4e-ec46-4ada-9fbb-608c829172ea" in namespace "emptydir-3824" to be "success or failure" Jun 7 21:26:54.284: INFO: Pod "pod-98ed9d4e-ec46-4ada-9fbb-608c829172ea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.658415ms Jun 7 21:26:56.306: INFO: Pod "pod-98ed9d4e-ec46-4ada-9fbb-608c829172ea": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026307537s Jun 7 21:26:58.311: INFO: Pod "pod-98ed9d4e-ec46-4ada-9fbb-608c829172ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030908936s STEP: Saw pod success Jun 7 21:26:58.311: INFO: Pod "pod-98ed9d4e-ec46-4ada-9fbb-608c829172ea" satisfied condition "success or failure" Jun 7 21:26:58.315: INFO: Trying to get logs from node jerma-worker pod pod-98ed9d4e-ec46-4ada-9fbb-608c829172ea container test-container: STEP: delete the pod Jun 7 21:26:58.536: INFO: Waiting for pod pod-98ed9d4e-ec46-4ada-9fbb-608c829172ea to disappear Jun 7 21:26:58.554: INFO: Pod pod-98ed9d4e-ec46-4ada-9fbb-608c829172ea no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:26:58.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3824" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1301,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:26:58.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: 
wait for the rc to be deleted Jun 7 21:27:06.126: INFO: 0 pods remaining Jun 7 21:27:06.126: INFO: 0 pods has nil DeletionTimestamp Jun 7 21:27:06.126: INFO: Jun 7 21:27:07.687: INFO: 0 pods remaining Jun 7 21:27:07.687: INFO: 0 pods has nil DeletionTimestamp Jun 7 21:27:07.687: INFO: STEP: Gathering metrics W0607 21:27:08.872746 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 7 21:27:08.872: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:27:08.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6623" for this suite. 
• [SLOW TEST:10.319 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":87,"skipped":1304,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:27:08.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:27:09.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2926" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":88,"skipped":1321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:27:09.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:27:09.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-847' Jun 7 21:27:10.495: INFO: stderr: "" Jun 7 21:27:10.495: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jun 7 21:27:10.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-847' Jun 7 21:27:11.029: INFO: stderr: "" Jun 7 21:27:11.029: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Jun 7 21:27:12.034: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:27:12.034: INFO: Found 0 / 1 Jun 7 21:27:13.034: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:27:13.034: INFO: Found 0 / 1 Jun 7 21:27:14.034: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:27:14.034: INFO: Found 1 / 1 Jun 7 21:27:14.034: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 7 21:27:14.089: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:27:14.089: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 7 21:27:14.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-ppl8m --namespace=kubectl-847' Jun 7 21:27:14.226: INFO: stderr: "" Jun 7 21:27:14.226: INFO: stdout: "Name: agnhost-master-ppl8m\nNamespace: kubectl-847\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Sun, 07 Jun 2020 21:27:10 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.239\nIPs:\n IP: 10.244.1.239\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://779f9bc62ec8050928863f76cd926c1b148c386d35352b29119e54f5541f1597\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 07 Jun 2020 21:27:12 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-2kx89 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-2kx89:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-2kx89\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n 
node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-847/agnhost-master-ppl8m to jerma-worker\n Normal Pulled 3s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker Started container agnhost-master\n" Jun 7 21:27:14.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-847' Jun 7 21:27:14.559: INFO: stderr: "" Jun 7 21:27:14.560: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-847\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-ppl8m\n" Jun 7 21:27:14.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-847' Jun 7 21:27:14.894: INFO: stderr: "" Jun 7 21:27:14.895: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-847\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.111.213.55\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.239:6379\nSession Affinity: None\nEvents: \n" Jun 7 21:27:14.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Jun 7 21:27:15.074: INFO: stderr: "" Jun 7 
21:27:15.074: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sun, 07 Jun 2020 21:27:10 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 07 Jun 2020 21:25:16 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 07 Jun 2020 21:25:16 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 07 Jun 2020 21:25:16 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 07 Jun 2020 21:25:16 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container 
Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 84d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 84d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 84d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 84d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 84d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 84d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 7 21:27:15.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-847' Jun 7 21:27:15.366: INFO: stderr: "" Jun 7 21:27:15.367: INFO: stdout: "Name: kubectl-847\nLabels: e2e-framework=kubectl\n e2e-run=f94a6f7f-8e5a-4257-a60b-544b9a974868\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:27:15.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-847" for this suite. 
• [SLOW TEST:6.119 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":89,"skipped":1349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:27:15.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:27:33.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5237" for this suite. 
• [SLOW TEST:18.226 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":90,"skipped":1385,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:27:33.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-7720cf0c-3a1e-4cd4-b335-ed205b599f9b STEP: Creating a pod to test consume secrets Jun 7 21:27:33.720: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4a57acc1-d004-4503-bf3d-78568f82b90e" in namespace "projected-2505" to be "success or failure" Jun 7 21:27:33.723: INFO: Pod "pod-projected-secrets-4a57acc1-d004-4503-bf3d-78568f82b90e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.866977ms Jun 7 21:27:35.727: INFO: Pod "pod-projected-secrets-4a57acc1-d004-4503-bf3d-78568f82b90e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007523334s Jun 7 21:27:37.732: INFO: Pod "pod-projected-secrets-4a57acc1-d004-4503-bf3d-78568f82b90e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012044568s STEP: Saw pod success Jun 7 21:27:37.732: INFO: Pod "pod-projected-secrets-4a57acc1-d004-4503-bf3d-78568f82b90e" satisfied condition "success or failure" Jun 7 21:27:37.735: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-4a57acc1-d004-4503-bf3d-78568f82b90e container projected-secret-volume-test: STEP: delete the pod Jun 7 21:27:37.755: INFO: Waiting for pod pod-projected-secrets-4a57acc1-d004-4503-bf3d-78568f82b90e to disappear Jun 7 21:27:37.759: INFO: Pod pod-projected-secrets-4a57acc1-d004-4503-bf3d-78568f82b90e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:27:37.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2505" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1386,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:27:37.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jun 7 21:27:37.839: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:27:37.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4588" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":92,"skipped":1395,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:27:37.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8756 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jun 7 21:27:38.040: INFO: Found 0 stateful pods, waiting for 3 Jun 7 21:27:48.047: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:27:48.047: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:27:48.047: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 7 21:27:48.075: INFO: Updating 
stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 7 21:27:58.140: INFO: Updating stateful set ss2 Jun 7 21:27:58.147: INFO: Waiting for Pod statefulset-8756/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 7 21:28:08.162: INFO: Waiting for Pod statefulset-8756/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jun 7 21:28:18.580: INFO: Found 2 stateful pods, waiting for 3 Jun 7 21:28:28.586: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:28:28.586: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:28:28.586: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 7 21:28:28.611: INFO: Updating stateful set ss2 Jun 7 21:28:28.623: INFO: Waiting for Pod statefulset-8756/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 7 21:28:38.633: INFO: Waiting for Pod statefulset-8756/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 7 21:28:48.650: INFO: Updating stateful set ss2 Jun 7 21:28:48.704: INFO: Waiting for StatefulSet statefulset-8756/ss2 to complete update Jun 7 21:28:48.704: INFO: Waiting for Pod statefulset-8756/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 7 21:28:58.714: INFO: Deleting all statefulset in ns statefulset-8756 Jun 7 21:28:58.716: INFO: Scaling statefulset ss2 to 0 Jun 7 21:29:28.736: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 21:29:28.740: INFO: Deleting statefulset ss2 [AfterEach] 
[sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:29:28.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8756" for this suite. • [SLOW TEST:110.803 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":93,"skipped":1409,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:29:28.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c0a55804-4474-40f3-82bf-ebd87d7ecbe7 STEP: Creating a pod to test consume secrets Jun 7 
21:29:28.835: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-77b785b7-a81b-4f4d-b6eb-7804d75eaa28" in namespace "projected-7579" to be "success or failure" Jun 7 21:29:28.845: INFO: Pod "pod-projected-secrets-77b785b7-a81b-4f4d-b6eb-7804d75eaa28": Phase="Pending", Reason="", readiness=false. Elapsed: 10.011785ms Jun 7 21:29:30.850: INFO: Pod "pod-projected-secrets-77b785b7-a81b-4f4d-b6eb-7804d75eaa28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014616461s Jun 7 21:29:32.855: INFO: Pod "pod-projected-secrets-77b785b7-a81b-4f4d-b6eb-7804d75eaa28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019490377s STEP: Saw pod success Jun 7 21:29:32.855: INFO: Pod "pod-projected-secrets-77b785b7-a81b-4f4d-b6eb-7804d75eaa28" satisfied condition "success or failure" Jun 7 21:29:32.858: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-77b785b7-a81b-4f4d-b6eb-7804d75eaa28 container projected-secret-volume-test: STEP: delete the pod Jun 7 21:29:32.902: INFO: Waiting for pod pod-projected-secrets-77b785b7-a81b-4f4d-b6eb-7804d75eaa28 to disappear Jun 7 21:29:32.917: INFO: Pod pod-projected-secrets-77b785b7-a81b-4f4d-b6eb-7804d75eaa28 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:29:32.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7579" for this suite. 
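The secret side of this pairing is a plain v1 Secret; the name below matches the one logged above, while the key and payload are placeholders (the actual bytes are not in the log):

```yaml
# Illustrative shape of the secret that the projection consumes.
# The data key and value are assumptions, not taken from the log.
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-c0a55804-4474-40f3-82bf-ebd87d7ecbe7
  namespace: projected-7579
data:
  data-1: dmFsdWUtMQ==   # base64 for "value-1" (placeholder payload)
```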
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1418,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:29:32.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-4834/configmap-test-7f3f096f-c758-4707-b098-cd7ffe97efcb STEP: Creating a pod to test consume configMaps Jun 7 21:29:33.022: INFO: Waiting up to 5m0s for pod "pod-configmaps-73e1a8af-b0f3-4f74-9015-0870cc5c05f9" in namespace "configmap-4834" to be "success or failure" Jun 7 21:29:33.031: INFO: Pod "pod-configmaps-73e1a8af-b0f3-4f74-9015-0870cc5c05f9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.226437ms Jun 7 21:29:35.064: INFO: Pod "pod-configmaps-73e1a8af-b0f3-4f74-9015-0870cc5c05f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041834526s Jun 7 21:29:37.068: INFO: Pod "pod-configmaps-73e1a8af-b0f3-4f74-9015-0870cc5c05f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046409581s STEP: Saw pod success Jun 7 21:29:37.068: INFO: Pod "pod-configmaps-73e1a8af-b0f3-4f74-9015-0870cc5c05f9" satisfied condition "success or failure" Jun 7 21:29:37.071: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-73e1a8af-b0f3-4f74-9015-0870cc5c05f9 container env-test: STEP: delete the pod Jun 7 21:29:37.149: INFO: Waiting for pod pod-configmaps-73e1a8af-b0f3-4f74-9015-0870cc5c05f9 to disappear Jun 7 21:29:37.151: INFO: Pod pod-configmaps-73e1a8af-b0f3-4f74-9015-0870cc5c05f9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:29:37.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4834" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1422,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:29:37.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jun 7 21:29:37.204: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Jun 7 21:29:37.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3977' Jun 7 21:29:37.537: INFO: stderr: "" Jun 7 21:29:37.537: INFO: stdout: "service/agnhost-slave created\n" Jun 7 21:29:37.537: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Jun 7 21:29:37.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3977' Jun 7 21:29:37.786: INFO: stderr: "" Jun 7 21:29:37.786: INFO: stdout: "service/agnhost-master created\n" Jun 7 21:29:37.787: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jun 7 21:29:37.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3977' Jun 7 21:29:38.033: INFO: stderr: "" Jun 7 21:29:38.033: INFO: stdout: "service/frontend created\n" Jun 7 21:29:38.034: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jun 7 21:29:38.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3977' Jun 7 21:29:38.279: INFO: stderr: "" Jun 7 21:29:38.279: INFO: stdout: "deployment.apps/frontend created\n" Jun 7 21:29:38.279: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jun 7 21:29:38.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3977' Jun 7 21:29:38.592: INFO: stderr: "" Jun 7 21:29:38.592: INFO: stdout: "deployment.apps/agnhost-master created\n" Jun 7 21:29:38.593: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jun 7 21:29:38.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3977' Jun 7 21:29:38.840: INFO: stderr: "" Jun 7 21:29:38.840: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jun 7 21:29:38.840: INFO: Waiting for all frontend pods to be Running. Jun 7 21:29:48.891: INFO: Waiting for frontend to serve content. Jun 7 21:29:48.902: INFO: Trying to add a new entry to the guestbook. Jun 7 21:29:48.914: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 7 21:29:48.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3977' Jun 7 21:29:49.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 7 21:29:49.211: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jun 7 21:29:49.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3977' Jun 7 21:29:50.624: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 7 21:29:50.624: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 7 21:29:50.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3977' Jun 7 21:29:50.756: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 7 21:29:50.756: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 7 21:29:50.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3977' Jun 7 21:29:50.869: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 7 21:29:50.869: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 7 21:29:50.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3977' Jun 7 21:29:51.461: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 7 21:29:51.461: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 7 21:29:51.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3977' Jun 7 21:29:51.907: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 7 21:29:51.907: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:29:51.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3977" for this suite.
• [SLOW TEST:14.945 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":96,"skipped":1435,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:29:52.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0607 21:29:54.233788 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 7 21:29:54.233: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:29:54.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5089" for this suite.
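The cascading delete exercised here works through ownerReferences: a ReplicaSet created by a Deployment carries metadata roughly like the sketch below, and the garbage collector follows it when the Deployment is deleted without orphaning. Names and the UID are placeholders, not values from this run:

```yaml
# Illustrative ownerReference that a Deployment stamps on its ReplicaSet.
# With foreground/background (non-orphaning) deletion of the Deployment,
# the garbage collector removes this ReplicaSet and, in turn, its pods.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: simpletest-deployment-6b9f7f5d4        # assumed name
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: simpletest-deployment                # assumed owner name
    uid: 00000000-0000-0000-0000-000000000000  # placeholder UID
    controller: true
    blockOwnerDeletion: true
```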
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":97,"skipped":1447,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:29:54.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-b1922545-faf9-4947-b974-c1377d3d29fd [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:29:54.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5140" for this suite. 
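The rejection this test expects happens at API-server validation time: secret data keys must match the usual config-key character set (alphanumerics, `-`, `_`, `.`), so an empty key is refused before anything is stored. A minimal sketch of the invalid object, with a placeholder value:

```yaml
# This secret is rejected on create: "" is not a valid data key.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test
data:
  "": dmFsdWUtMQ==   # empty key: fails validation
```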
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":98,"skipped":1462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:29:54.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:30:15.271: INFO: Container started at 2020-06-07 21:29:58 +0000 UTC, pod became ready at 2020-06-07 21:30:15 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:30:15.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8575" for this suite. 
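The ~17-second gap logged between container start (21:29:58) and readiness (21:30:15) comes from the probe's initial delay. A pod of roughly this shape (image reused from elsewhere in this log; the delay value is an assumption chosen to match the observed gap) reproduces the behavior:

```yaml
# Sketch: the container runs immediately, but the pod is only marked Ready
# after initialDelaySeconds have elapsed and the probe has succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-example   # illustrative name
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/httpd:2.4.38-alpine
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15   # assumed value; readiness lags start by at least this
      periodSeconds: 5
```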
• [SLOW TEST:20.825 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1485,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:30:15.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 7 21:30:15.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3707' Jun 7 21:30:15.418: INFO: stderr: "kubectl run --generator=deployment/apps.v1 
is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 7 21:30:15.418: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Jun 7 21:30:17.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3707' Jun 7 21:30:17.849: INFO: stderr: "" Jun 7 21:30:17.849: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:30:17.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3707" for this suite. 
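The deprecation warning above points away from generators toward explicit objects. The `--generator=deployment/apps.v1` invocation in this test corresponds roughly to the manifest below; the `run:` labels are an assumption based on kubectl's usual labeling for `run`, not something printed in the log:

```yaml
# Approximate generator-free equivalent of the deprecated
# `kubectl run --generator=deployment/apps.v1` call above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment   # assumed label key/value
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```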
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":100,"skipped":1499,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:30:17.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jun 7 21:30:18.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5917' Jun 7 21:30:18.312: INFO: stderr: "" Jun 7 21:30:18.312: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 7 21:30:19.317: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:30:19.317: INFO: Found 0 / 1 Jun 7 21:30:20.342: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:30:20.342: INFO: Found 0 / 1 Jun 7 21:30:21.317: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:30:21.317: INFO: Found 0 / 1 Jun 7 21:30:22.318: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:30:22.318: INFO: Found 1 / 1 Jun 7 21:30:22.318: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Jun 7 21:30:22.322: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:30:22.322: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 7 21:30:22.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-7zp29 --namespace=kubectl-5917 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 7 21:30:22.419: INFO: stderr: "" Jun 7 21:30:22.419: INFO: stdout: "pod/agnhost-master-7zp29 patched\n" STEP: checking annotations Jun 7 21:30:22.514: INFO: Selector matched 1 pods for map[app:agnhost] Jun 7 21:30:22.514: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:30:22.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5917" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":101,"skipped":1507,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:30:22.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap 
with name configmap-test-volume-e12fbce9-07f7-4ad6-adda-4b50d5a4817b STEP: Creating a pod to test consume configMaps Jun 7 21:30:22.637: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0" in namespace "configmap-6041" to be "success or failure" Jun 7 21:30:22.646: INFO: Pod "pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543384ms Jun 7 21:30:24.650: INFO: Pod "pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013003608s Jun 7 21:30:26.776: INFO: Pod "pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0": Phase="Running", Reason="", readiness=true. Elapsed: 4.138647898s Jun 7 21:30:28.795: INFO: Pod "pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157138641s STEP: Saw pod success Jun 7 21:30:28.795: INFO: Pod "pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0" satisfied condition "success or failure" Jun 7 21:30:28.807: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0 container configmap-volume-test: STEP: delete the pod Jun 7 21:30:28.832: INFO: Waiting for pod pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0 to disappear Jun 7 21:30:28.840: INFO: Pod pod-configmaps-b5079627-4cc4-4840-8686-d581129b27f0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:30:28.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6041" for this suite. 
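The configMap counterpart to the projected-secret case works the same way: `defaultMode` on the `configMap` volume source controls the mode of the projected files. A minimal sketch, with the image and command assumed for illustration:

```yaml
# Sketch of a pod consuming a configMap volume with defaultMode set.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-e12fbce9-07f7-4ad6-adda-4b50d5a4817b
      defaultMode: 0400   # files created read-only for the owner
```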
• [SLOW TEST:6.313 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1514,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:30:28.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:31:28.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-689" for this suite.
• [SLOW TEST:60.095 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1521,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:31:28.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 21:31:29.014: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:31:30.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2513" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":104,"skipped":1550,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:31:30.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 7 21:31:30.272: INFO: Waiting up to 5m0s for pod "pod-3cea0a55-b374-4433-adee-9e46eb865402" in namespace "emptydir-6239" to be "success or failure"
Jun 7 21:31:30.275: INFO: Pod "pod-3cea0a55-b374-4433-adee-9e46eb865402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.846504ms
Jun 7 21:31:32.280: INFO: Pod "pod-3cea0a55-b374-4433-adee-9e46eb865402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007485023s
Jun 7 21:31:34.335: INFO: Pod "pod-3cea0a55-b374-4433-adee-9e46eb865402": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062366933s
STEP: Saw pod success
Jun 7 21:31:34.335: INFO: Pod "pod-3cea0a55-b374-4433-adee-9e46eb865402" satisfied condition "success or failure"
Jun 7 21:31:34.338: INFO: Trying to get logs from node jerma-worker pod pod-3cea0a55-b374-4433-adee-9e46eb865402 container test-container:
STEP: delete the pod
Jun 7 21:31:34.379: INFO: Waiting for pod pod-3cea0a55-b374-4433-adee-9e46eb865402 to disappear
Jun 7 21:31:34.502: INFO: Pod pod-3cea0a55-b374-4433-adee-9e46eb865402 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:31:34.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6239" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:31:34.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2208
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 7 21:31:34.965: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 7 21:31:59.110: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.254:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2208 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 21:31:59.110: INFO: >>> kubeConfig: /root/.kube/config
I0607 21:31:59.148573 6 log.go:172] (0xc00152a580) (0xc001e05680) Create stream
I0607 21:31:59.148609 6 log.go:172] (0xc00152a580) (0xc001e05680) Stream added, broadcasting: 1
I0607 21:31:59.151305 6 log.go:172] (0xc00152a580) Reply frame received for 1
I0607 21:31:59.151360 6 log.go:172] (0xc00152a580) (0xc00274bea0) Create stream
I0607 21:31:59.151373 6 log.go:172] (0xc00152a580) (0xc00274bea0) Stream added, broadcasting: 3
I0607 21:31:59.152373 6 log.go:172] (0xc00152a580) Reply frame received for 3
I0607 21:31:59.152404 6 log.go:172] (0xc00152a580) (0xc001e93400) Create stream
I0607 21:31:59.152419 6 log.go:172] (0xc00152a580) (0xc001e93400) Stream added, broadcasting: 5
I0607 21:31:59.153560 6 log.go:172] (0xc00152a580) Reply frame received for 5
I0607 21:31:59.273509 6 log.go:172] (0xc00152a580) Data frame received for 3
I0607 21:31:59.273554 6 log.go:172] (0xc00274bea0) (3) Data frame handling
I0607 21:31:59.273590 6 log.go:172] (0xc00274bea0) (3) Data frame sent
I0607 21:31:59.273606 6 log.go:172] (0xc00152a580) Data frame received for 3
I0607 21:31:59.273620 6 log.go:172] (0xc00274bea0) (3) Data frame handling
I0607 21:31:59.273911 6 log.go:172] (0xc00152a580) Data frame received for 5
I0607 21:31:59.273939 6 log.go:172] (0xc001e93400) (5) Data frame handling
I0607 21:31:59.276758 6 log.go:172] (0xc00152a580) Data frame received for 1
I0607 21:31:59.276819 6 log.go:172] (0xc001e05680) (1) Data frame handling
I0607 21:31:59.276897 6 log.go:172] (0xc001e05680) (1) Data frame sent
I0607 21:31:59.276950 6 log.go:172] (0xc00152a580) (0xc001e05680) Stream removed, broadcasting: 1
I0607 21:31:59.277028 6 log.go:172] (0xc00152a580) Go away received
I0607 21:31:59.277289 6 log.go:172] (0xc00152a580) (0xc001e05680) Stream removed, broadcasting: 1
I0607 21:31:59.277318 6 log.go:172] (0xc00152a580) (0xc00274bea0) Stream removed, broadcasting: 3
I0607 21:31:59.277333 6 log.go:172] (0xc00152a580) (0xc001e93400) Stream removed, broadcasting: 5
Jun 7 21:31:59.277: INFO: Found all expected endpoints: [netserver-0]
Jun 7 21:31:59.280: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.159:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2208 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 7 21:31:59.281: INFO: >>> kubeConfig: /root/.kube/config
I0607 21:31:59.301309 6 log.go:172] (0xc00090b550) (0xc00282c820) Create stream
I0607 21:31:59.301343 6 log.go:172] (0xc00090b550) (0xc00282c820) Stream added, broadcasting: 1
I0607 21:31:59.303022 6 log.go:172] (0xc00090b550) Reply frame received for 1
I0607 21:31:59.303058 6 log.go:172] (0xc00090b550) (0xc00282c960) Create stream
I0607 21:31:59.303066 6 log.go:172] (0xc00090b550) (0xc00282c960) Stream added, broadcasting: 3
I0607 21:31:59.303676 6 log.go:172] (0xc00090b550) Reply frame received for 3
I0607 21:31:59.303703 6 log.go:172] (0xc00090b550) (0xc001e935e0) Create stream
I0607 21:31:59.303711 6 log.go:172] (0xc00090b550) (0xc001e935e0) Stream added, broadcasting: 5
I0607 21:31:59.304296 6 log.go:172] (0xc00090b550) Reply frame received for 5
I0607 21:31:59.358955 6 log.go:172] (0xc00090b550) Data frame received for 3
I0607 21:31:59.359022 6 log.go:172] (0xc00282c960) (3) Data frame handling
I0607 21:31:59.359052 6 log.go:172] (0xc00282c960) (3) Data frame sent
I0607 21:31:59.359182 6 log.go:172] (0xc00090b550) Data frame received for 3
I0607 21:31:59.359211 6 log.go:172] (0xc00282c960) (3) Data frame handling
I0607 21:31:59.359446 6 log.go:172] (0xc00090b550) Data frame received for 5
I0607 21:31:59.359513 6 log.go:172] (0xc001e935e0) (5) Data frame handling
I0607 21:31:59.361015 6 log.go:172] (0xc00090b550) Data frame received for 1
I0607 21:31:59.361047 6 log.go:172] (0xc00282c820) (1) Data frame handling
I0607 21:31:59.361059 6 log.go:172] (0xc00282c820) (1) Data frame sent
I0607 21:31:59.361074 6 log.go:172] (0xc00090b550) (0xc00282c820) Stream removed, broadcasting: 1
I0607 21:31:59.361096 6 log.go:172] (0xc00090b550) Go away received
I0607 21:31:59.361403 6 log.go:172] (0xc00090b550) (0xc00282c820) Stream removed, broadcasting: 1
I0607 21:31:59.361446 6 log.go:172] (0xc00090b550) (0xc00282c960) Stream removed, broadcasting: 3
I0607 21:31:59.361471 6 log.go:172] (0xc00090b550) (0xc001e935e0) Stream removed, broadcasting: 5
Jun 7 21:31:59.361: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:31:59.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2208" for this suite.
• [SLOW TEST:24.846 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1576,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:31:59.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 21:31:59.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jun 7 21:32:02.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8703 create -f -'
Jun 7 21:32:07.129: INFO: stderr: ""
Jun 7 21:32:07.129: INFO: stdout: "e2e-test-crd-publish-openapi-4346-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jun 7 21:32:07.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8703 delete e2e-test-crd-publish-openapi-4346-crds test-cr'
Jun 7 21:32:07.266: INFO: stderr: ""
Jun 7 21:32:07.266: INFO: stdout: "e2e-test-crd-publish-openapi-4346-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jun 7 21:32:07.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8703 apply -f -'
Jun 7 21:32:07.508: INFO: stderr: ""
Jun 7 21:32:07.508: INFO: stdout: "e2e-test-crd-publish-openapi-4346-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jun 7 21:32:07.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8703 delete e2e-test-crd-publish-openapi-4346-crds test-cr'
Jun 7 21:32:07.628: INFO: stderr: ""
Jun 7 21:32:07.628: INFO: stdout: "e2e-test-crd-publish-openapi-4346-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jun 7 21:32:07.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4346-crds'
Jun 7 21:32:07.884: INFO: stderr: ""
Jun 7 21:32:07.884: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4346-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:32:10.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8703" for this suite.
• [SLOW TEST:11.451 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":107,"skipped":1586,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:32:10.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jun 7 21:32:10.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a93e430-8add-44f0-be5b-a3219ab95dd5" in namespace "projected-5149" to be "success or failure"
Jun 7 21:32:10.894: INFO: Pod "downwardapi-volume-9a93e430-8add-44f0-be5b-a3219ab95dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.720054ms
Jun 7 21:32:12.898: INFO: Pod "downwardapi-volume-9a93e430-8add-44f0-be5b-a3219ab95dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020879917s
Jun 7 21:32:14.903: INFO: Pod "downwardapi-volume-9a93e430-8add-44f0-be5b-a3219ab95dd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025279362s
STEP: Saw pod success
Jun 7 21:32:14.903: INFO: Pod "downwardapi-volume-9a93e430-8add-44f0-be5b-a3219ab95dd5" satisfied condition "success or failure"
Jun 7 21:32:14.906: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9a93e430-8add-44f0-be5b-a3219ab95dd5 container client-container:
STEP: delete the pod
Jun 7 21:32:14.936: INFO: Waiting for pod downwardapi-volume-9a93e430-8add-44f0-be5b-a3219ab95dd5 to disappear
Jun 7 21:32:14.946: INFO: Pod downwardapi-volume-9a93e430-8add-44f0-be5b-a3219ab95dd5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:32:14.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5149" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1603,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:32:14.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Jun 7 21:32:19.056: INFO: Pod pod-hostip-9362d40c-6481-4e6d-a1df-f658dafb14a2 has hostIP: 172.17.0.10
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:32:19.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5134" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1660,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:32:19.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-162/configmap-test-a0ef7ad7-6fbc-422b-85a4-69697043f351
STEP: Creating a pod to test consume configMaps
Jun 7 21:32:19.128: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c93ef32-05c5-488a-92d1-130579f13c86" in namespace "configmap-162" to be "success or failure"
Jun 7 21:32:19.132: INFO: Pod "pod-configmaps-6c93ef32-05c5-488a-92d1-130579f13c86": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303173ms
Jun 7 21:32:21.136: INFO: Pod "pod-configmaps-6c93ef32-05c5-488a-92d1-130579f13c86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007432295s
Jun 7 21:32:23.140: INFO: Pod "pod-configmaps-6c93ef32-05c5-488a-92d1-130579f13c86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011590845s
STEP: Saw pod success
Jun 7 21:32:23.140: INFO: Pod "pod-configmaps-6c93ef32-05c5-488a-92d1-130579f13c86" satisfied condition "success or failure"
Jun 7 21:32:23.143: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-6c93ef32-05c5-488a-92d1-130579f13c86 container env-test:
STEP: delete the pod
Jun 7 21:32:23.362: INFO: Waiting for pod pod-configmaps-6c93ef32-05c5-488a-92d1-130579f13c86 to disappear
Jun 7 21:32:23.385: INFO: Pod pod-configmaps-6c93ef32-05c5-488a-92d1-130579f13c86 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:32:23.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-162" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1679,"failed":0}
SSS
------------------------------
[sig-network] DNS
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:32:23.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4308.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4308.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 7 21:32:31.543: INFO: DNS probes using dns-4308/dns-test-bc108b20-3f13-4e8f-9abd-6cd5ee618475 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 21:32:31.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4308" for this suite.
• [SLOW TEST:8.310 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":111,"skipped":1682,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 21:32:31.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6535
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6535
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6535
Jun 7 21:32:32.209: INFO: Found 0 stateful pods, waiting for 1
Jun 7 21:32:42.214: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jun 7 21:32:42.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6535 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 7 21:32:42.535: INFO: stderr: "I0607 21:32:42.359415 1819 log.go:172] (0xc000a0e6e0) (0xc0006d3e00) Create stream\nI0607 21:32:42.359480 1819 log.go:172] (0xc000a0e6e0) (0xc0006d3e00) Stream added, broadcasting: 1\nI0607 21:32:42.362215 1819 log.go:172] (0xc000a0e6e0) Reply frame received for 1\nI0607 21:32:42.362252 1819 log.go:172] (0xc000a0e6e0) (0xc0005fc640) Create stream\nI0607 21:32:42.362261 1819 log.go:172] (0xc000a0e6e0) (0xc0005fc640) Stream added, broadcasting: 3\nI0607 21:32:42.362942 1819 log.go:172] (0xc000a0e6e0) Reply frame received for 3\nI0607 21:32:42.362979 1819 log.go:172] (0xc000a0e6e0) (0xc0006d3ea0) Create stream\nI0607 21:32:42.363001 1819 log.go:172] (0xc000a0e6e0) (0xc0006d3ea0) Stream added, broadcasting: 5\nI0607 21:32:42.363755 1819 log.go:172] (0xc000a0e6e0) Reply frame received for 5\nI0607 21:32:42.444311 1819 log.go:172] (0xc000a0e6e0) Data frame received for 5\nI0607 21:32:42.444342 1819 log.go:172] (0xc0006d3ea0) (5) Data frame handling\nI0607 21:32:42.444367 1819 log.go:172] (0xc0006d3ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:32:42.524983 1819 log.go:172] (0xc000a0e6e0) Data frame received for 3\nI0607 21:32:42.525002 1819 log.go:172] (0xc0005fc640) (3) Data frame handling\nI0607 21:32:42.525013 1819 log.go:172] (0xc0005fc640) (3) Data frame sent\nI0607 21:32:42.525440 1819 log.go:172] (0xc000a0e6e0) Data frame received for 5\nI0607 21:32:42.525461 1819 log.go:172] (0xc0006d3ea0) (5) Data frame handling\nI0607 21:32:42.525679 1819 log.go:172] (0xc000a0e6e0) Data frame received for 3\nI0607 21:32:42.525714 1819 log.go:172] (0xc0005fc640) (3) Data frame handling\nI0607 21:32:42.527812 1819 log.go:172] (0xc000a0e6e0) Data frame received for 1\nI0607 21:32:42.527851 1819 log.go:172] (0xc0006d3e00) (1) Data frame handling\nI0607 21:32:42.527901 1819 log.go:172] (0xc0006d3e00) (1) Data frame sent\nI0607 21:32:42.527940 1819 log.go:172] (0xc000a0e6e0) (0xc0006d3e00) Stream removed, broadcasting: 1\nI0607 21:32:42.527987 1819 log.go:172] (0xc000a0e6e0) Go away received\nI0607 21:32:42.528471 1819 log.go:172] (0xc000a0e6e0) (0xc0006d3e00) Stream removed, broadcasting: 1\nI0607 21:32:42.528502 1819 log.go:172] (0xc000a0e6e0) (0xc0005fc640) Stream removed, broadcasting: 3\nI0607 21:32:42.528517 1819 log.go:172] (0xc000a0e6e0) (0xc0006d3ea0) Stream removed, broadcasting: 5\n"
Jun 7 21:32:42.535: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 7 21:32:42.535: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jun 7 21:32:42.539: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 7 21:32:52.545: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 7 21:32:52.545: INFO: Waiting for statefulset status.replicas updated to 0
Jun 7 21:32:52.584: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999939s
Jun 7 21:32:53.588: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.972878268s
Jun 7 21:32:54.592: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.968139419s
Jun 7 21:32:55.597: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964626516s
Jun 7 21:32:56.602: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.959644623s
Jun 7 21:32:57.607: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.954960014s
Jun 7 21:32:58.612: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.950023999s
Jun 7 21:32:59.617: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.945124195s
Jun 7 21:33:00.621: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.940124016s
Jun 7 21:33:01.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 935.480167ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6535
Jun 7 21:33:02.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6535 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 7 21:33:02.868: INFO: stderr: "I0607 21:33:02.767450 1840 log.go:172] (0xc000a782c0) (0xc000552460) Create stream\nI0607 21:33:02.767533 1840 log.go:172] (0xc000a782c0) (0xc000552460) Stream added, broadcasting: 1\nI0607 21:33:02.770811 1840 log.go:172] (0xc000a782c0) Reply frame received for 1\nI0607 21:33:02.770873 1840 log.go:172] (0xc000a782c0) (0xc00081e000) Create stream\nI0607 21:33:02.770899 1840 log.go:172] (0xc000a782c0) (0xc00081e000) Stream added, broadcasting: 3\nI0607 21:33:02.771766 1840 log.go:172] (0xc000a782c0) Reply frame received for 3\nI0607 21:33:02.771783 1840 log.go:172] (0xc000a782c0) (0xc00081e0a0) Create stream\nI0607 21:33:02.771789 1840 log.go:172] (0xc000a782c0) (0xc00081e0a0) Stream added, broadcasting: 5\nI0607 21:33:02.772606 1840 log.go:172] (0xc000a782c0) Reply frame received for 5\nI0607 21:33:02.860140 1840 log.go:172] (0xc000a782c0) Data frame received for 3\nI0607 21:33:02.860184 1840 log.go:172] (0xc00081e000) (3) Data frame handling\nI0607 21:33:02.860207 1840 log.go:172] (0xc00081e000) (3) Data frame sent\nI0607 21:33:02.860226 1840 log.go:172] (0xc000a782c0) Data frame received for 3\nI0607 21:33:02.860246 1840 log.go:172] (0xc00081e000) (3) Data frame handling\nI0607 21:33:02.860280 1840 log.go:172] (0xc000a782c0) Data frame received for 5\nI0607 21:33:02.860300 1840 log.go:172] (0xc00081e0a0) (5) Data frame handling\nI0607 21:33:02.860319 1840 log.go:172] (0xc00081e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0607 21:33:02.860342 1840 log.go:172] (0xc000a782c0) Data frame received for 5\nI0607 21:33:02.860382 1840 log.go:172] (0xc00081e0a0) (5) Data frame handling\nI0607 21:33:02.862164 1840 log.go:172] (0xc000a782c0) Data frame received for 1\nI0607 21:33:02.862196 1840 log.go:172] (0xc000552460) (1) Data frame handling\nI0607 21:33:02.862220 1840 log.go:172] (0xc000552460) (1) Data frame sent\nI0607 21:33:02.862234 1840 log.go:172] (0xc000a782c0) (0xc000552460) Stream removed, broadcasting: 1\nI0607 21:33:02.862254 1840 log.go:172] (0xc000a782c0) Go away received\nI0607 21:33:02.862694 1840 log.go:172] (0xc000a782c0) (0xc000552460) Stream removed, broadcasting: 1\nI0607 21:33:02.862715 1840 log.go:172] (0xc000a782c0) (0xc00081e000) Stream removed, broadcasting: 3\nI0607 21:33:02.862727 1840 log.go:172] (0xc000a782c0) (0xc00081e0a0) Stream removed, broadcasting: 5\n"
Jun 7 21:33:02.868: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 7 21:33:02.868: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jun 7 21:33:02.872: INFO: Found 1 stateful pods, waiting for 3
Jun 7 21:33:12.878: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 21:33:12.878: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 7 21:33:12.878: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jun 7 21:33:12.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6535 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html
/tmp/ || true' Jun 7 21:33:13.132: INFO: stderr: "I0607 21:33:13.033445 1860 log.go:172] (0xc0000f4a50) (0xc000924140) Create stream\nI0607 21:33:13.033500 1860 log.go:172] (0xc0000f4a50) (0xc000924140) Stream added, broadcasting: 1\nI0607 21:33:13.035502 1860 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0607 21:33:13.035524 1860 log.go:172] (0xc0000f4a50) (0xc00022d360) Create stream\nI0607 21:33:13.035532 1860 log.go:172] (0xc0000f4a50) (0xc00022d360) Stream added, broadcasting: 3\nI0607 21:33:13.036233 1860 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0607 21:33:13.036265 1860 log.go:172] (0xc0000f4a50) (0xc0009241e0) Create stream\nI0607 21:33:13.036273 1860 log.go:172] (0xc0000f4a50) (0xc0009241e0) Stream added, broadcasting: 5\nI0607 21:33:13.036947 1860 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0607 21:33:13.125870 1860 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0607 21:33:13.125913 1860 log.go:172] (0xc0009241e0) (5) Data frame handling\nI0607 21:33:13.125930 1860 log.go:172] (0xc0009241e0) (5) Data frame sent\nI0607 21:33:13.125941 1860 log.go:172] (0xc0000f4a50) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:33:13.125952 1860 log.go:172] (0xc0009241e0) (5) Data frame handling\nI0607 21:33:13.126028 1860 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0607 21:33:13.126061 1860 log.go:172] (0xc00022d360) (3) Data frame handling\nI0607 21:33:13.126078 1860 log.go:172] (0xc00022d360) (3) Data frame sent\nI0607 21:33:13.126089 1860 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0607 21:33:13.126097 1860 log.go:172] (0xc00022d360) (3) Data frame handling\nI0607 21:33:13.127244 1860 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0607 21:33:13.127264 1860 log.go:172] (0xc000924140) (1) Data frame handling\nI0607 21:33:13.127273 1860 log.go:172] (0xc000924140) (1) Data frame sent\nI0607 21:33:13.127283 1860 log.go:172] (0xc0000f4a50) (0xc000924140) 
Stream removed, broadcasting: 1\nI0607 21:33:13.127295 1860 log.go:172] (0xc0000f4a50) Go away received\nI0607 21:33:13.127576 1860 log.go:172] (0xc0000f4a50) (0xc000924140) Stream removed, broadcasting: 1\nI0607 21:33:13.127597 1860 log.go:172] (0xc0000f4a50) (0xc00022d360) Stream removed, broadcasting: 3\nI0607 21:33:13.127607 1860 log.go:172] (0xc0000f4a50) (0xc0009241e0) Stream removed, broadcasting: 5\n" Jun 7 21:33:13.132: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:33:13.132: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 7 21:33:13.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6535 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 7 21:33:13.445: INFO: stderr: "I0607 21:33:13.310904 1882 log.go:172] (0xc0007de840) (0xc0007d4000) Create stream\nI0607 21:33:13.310956 1882 log.go:172] (0xc0007de840) (0xc0007d4000) Stream added, broadcasting: 1\nI0607 21:33:13.314536 1882 log.go:172] (0xc0007de840) Reply frame received for 1\nI0607 21:33:13.314593 1882 log.go:172] (0xc0007de840) (0xc0007099a0) Create stream\nI0607 21:33:13.314623 1882 log.go:172] (0xc0007de840) (0xc0007099a0) Stream added, broadcasting: 3\nI0607 21:33:13.315722 1882 log.go:172] (0xc0007de840) Reply frame received for 3\nI0607 21:33:13.315763 1882 log.go:172] (0xc0007de840) (0xc000709b80) Create stream\nI0607 21:33:13.315775 1882 log.go:172] (0xc0007de840) (0xc000709b80) Stream added, broadcasting: 5\nI0607 21:33:13.316836 1882 log.go:172] (0xc0007de840) Reply frame received for 5\nI0607 21:33:13.381529 1882 log.go:172] (0xc0007de840) Data frame received for 5\nI0607 21:33:13.381555 1882 log.go:172] (0xc000709b80) (5) Data frame handling\nI0607 21:33:13.381571 1882 log.go:172] (0xc000709b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0607 21:33:13.436155 1882 log.go:172] (0xc0007de840) Data frame received for 5\nI0607 21:33:13.436175 1882 log.go:172] (0xc000709b80) (5) Data frame handling\nI0607 21:33:13.436192 1882 log.go:172] (0xc0007de840) Data frame received for 3\nI0607 21:33:13.436197 1882 log.go:172] (0xc0007099a0) (3) Data frame handling\nI0607 21:33:13.436204 1882 log.go:172] (0xc0007099a0) (3) Data frame sent\nI0607 21:33:13.436208 1882 log.go:172] (0xc0007de840) Data frame received for 3\nI0607 21:33:13.436212 1882 log.go:172] (0xc0007099a0) (3) Data frame handling\nI0607 21:33:13.438888 1882 log.go:172] (0xc0007de840) Data frame received for 1\nI0607 21:33:13.438922 1882 log.go:172] (0xc0007d4000) (1) Data frame handling\nI0607 21:33:13.438953 1882 log.go:172] (0xc0007d4000) (1) Data frame sent\nI0607 21:33:13.438971 1882 log.go:172] (0xc0007de840) (0xc0007d4000) Stream removed, broadcasting: 1\nI0607 21:33:13.439272 1882 log.go:172] (0xc0007de840) Go away received\nI0607 21:33:13.439761 1882 log.go:172] (0xc0007de840) (0xc0007d4000) Stream removed, broadcasting: 1\nI0607 21:33:13.439782 1882 log.go:172] (0xc0007de840) (0xc0007099a0) Stream removed, broadcasting: 3\nI0607 21:33:13.439794 1882 log.go:172] (0xc0007de840) (0xc000709b80) Stream removed, broadcasting: 5\n" Jun 7 21:33:13.445: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:33:13.445: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 7 21:33:13.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6535 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 7 21:33:13.710: INFO: stderr: "I0607 21:33:13.599602 1904 log.go:172] (0xc000b1ad10) (0xc000aba320) Create stream\nI0607 21:33:13.599688 1904 log.go:172] (0xc000b1ad10) (0xc000aba320) Stream added, broadcasting: 1\nI0607 21:33:13.604401 1904 
log.go:172] (0xc000b1ad10) Reply frame received for 1\nI0607 21:33:13.604432 1904 log.go:172] (0xc000b1ad10) (0xc00061c5a0) Create stream\nI0607 21:33:13.604440 1904 log.go:172] (0xc000b1ad10) (0xc00061c5a0) Stream added, broadcasting: 3\nI0607 21:33:13.605455 1904 log.go:172] (0xc000b1ad10) Reply frame received for 3\nI0607 21:33:13.605493 1904 log.go:172] (0xc000b1ad10) (0xc000783360) Create stream\nI0607 21:33:13.605502 1904 log.go:172] (0xc000b1ad10) (0xc000783360) Stream added, broadcasting: 5\nI0607 21:33:13.606399 1904 log.go:172] (0xc000b1ad10) Reply frame received for 5\nI0607 21:33:13.671270 1904 log.go:172] (0xc000b1ad10) Data frame received for 5\nI0607 21:33:13.671299 1904 log.go:172] (0xc000783360) (5) Data frame handling\nI0607 21:33:13.671319 1904 log.go:172] (0xc000783360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:33:13.700166 1904 log.go:172] (0xc000b1ad10) Data frame received for 3\nI0607 21:33:13.700197 1904 log.go:172] (0xc00061c5a0) (3) Data frame handling\nI0607 21:33:13.700215 1904 log.go:172] (0xc00061c5a0) (3) Data frame sent\nI0607 21:33:13.701749 1904 log.go:172] (0xc000b1ad10) Data frame received for 5\nI0607 21:33:13.701769 1904 log.go:172] (0xc000783360) (5) Data frame handling\nI0607 21:33:13.701919 1904 log.go:172] (0xc000b1ad10) Data frame received for 3\nI0607 21:33:13.701940 1904 log.go:172] (0xc00061c5a0) (3) Data frame handling\nI0607 21:33:13.703798 1904 log.go:172] (0xc000b1ad10) Data frame received for 1\nI0607 21:33:13.703818 1904 log.go:172] (0xc000aba320) (1) Data frame handling\nI0607 21:33:13.703836 1904 log.go:172] (0xc000aba320) (1) Data frame sent\nI0607 21:33:13.703852 1904 log.go:172] (0xc000b1ad10) (0xc000aba320) Stream removed, broadcasting: 1\nI0607 21:33:13.703869 1904 log.go:172] (0xc000b1ad10) Go away received\nI0607 21:33:13.704249 1904 log.go:172] (0xc000b1ad10) (0xc000aba320) Stream removed, broadcasting: 1\nI0607 21:33:13.704280 1904 log.go:172] (0xc000b1ad10) 
(0xc00061c5a0) Stream removed, broadcasting: 3\nI0607 21:33:13.704292 1904 log.go:172] (0xc000b1ad10) (0xc000783360) Stream removed, broadcasting: 5\n" Jun 7 21:33:13.710: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:33:13.710: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 7 21:33:13.710: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 21:33:13.719: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 7 21:33:23.726: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 7 21:33:23.726: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 7 21:33:23.726: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 7 21:33:23.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999645s Jun 7 21:33:24.757: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981271833s Jun 7 21:33:25.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975623789s Jun 7 21:33:26.768: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970394522s Jun 7 21:33:27.774: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.965098412s Jun 7 21:33:28.779: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959198655s Jun 7 21:33:29.784: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.954281185s Jun 7 21:33:30.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948736396s Jun 7 21:33:31.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.943354982s Jun 7 21:33:32.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.873921ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-6535 Jun 7 21:33:33.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6535 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 7 21:33:34.068: INFO: stderr: "I0607 21:33:33.966848 1927 log.go:172] (0xc000ae64d0) (0xc000b68280) Create stream\nI0607 21:33:33.966930 1927 log.go:172] (0xc000ae64d0) (0xc000b68280) Stream added, broadcasting: 1\nI0607 21:33:33.970047 1927 log.go:172] (0xc000ae64d0) Reply frame received for 1\nI0607 21:33:33.970109 1927 log.go:172] (0xc000ae64d0) (0xc000a66000) Create stream\nI0607 21:33:33.970132 1927 log.go:172] (0xc000ae64d0) (0xc000a66000) Stream added, broadcasting: 3\nI0607 21:33:33.971424 1927 log.go:172] (0xc000ae64d0) Reply frame received for 3\nI0607 21:33:33.971463 1927 log.go:172] (0xc000ae64d0) (0xc000b68320) Create stream\nI0607 21:33:33.971477 1927 log.go:172] (0xc000ae64d0) (0xc000b68320) Stream added, broadcasting: 5\nI0607 21:33:33.972766 1927 log.go:172] (0xc000ae64d0) Reply frame received for 5\nI0607 21:33:34.062765 1927 log.go:172] (0xc000ae64d0) Data frame received for 3\nI0607 21:33:34.062825 1927 log.go:172] (0xc000a66000) (3) Data frame handling\nI0607 21:33:34.062843 1927 log.go:172] (0xc000a66000) (3) Data frame sent\nI0607 21:33:34.062854 1927 log.go:172] (0xc000ae64d0) Data frame received for 3\nI0607 21:33:34.062862 1927 log.go:172] (0xc000a66000) (3) Data frame handling\nI0607 21:33:34.062895 1927 log.go:172] (0xc000ae64d0) Data frame received for 5\nI0607 21:33:34.062909 1927 log.go:172] (0xc000b68320) (5) Data frame handling\nI0607 21:33:34.062927 1927 log.go:172] (0xc000b68320) (5) Data frame sent\nI0607 21:33:34.062938 1927 log.go:172] (0xc000ae64d0) Data frame received for 5\nI0607 21:33:34.062946 1927 log.go:172] (0xc000b68320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0607 21:33:34.063919 1927 log.go:172] (0xc000ae64d0) Data frame received for 1\nI0607 
21:33:34.063954 1927 log.go:172] (0xc000b68280) (1) Data frame handling\nI0607 21:33:34.063992 1927 log.go:172] (0xc000b68280) (1) Data frame sent\nI0607 21:33:34.064014 1927 log.go:172] (0xc000ae64d0) (0xc000b68280) Stream removed, broadcasting: 1\nI0607 21:33:34.064036 1927 log.go:172] (0xc000ae64d0) Go away received\nI0607 21:33:34.064437 1927 log.go:172] (0xc000ae64d0) (0xc000b68280) Stream removed, broadcasting: 1\nI0607 21:33:34.064457 1927 log.go:172] (0xc000ae64d0) (0xc000a66000) Stream removed, broadcasting: 3\nI0607 21:33:34.064466 1927 log.go:172] (0xc000ae64d0) (0xc000b68320) Stream removed, broadcasting: 5\n" Jun 7 21:33:34.068: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 7 21:33:34.068: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 7 21:33:34.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6535 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 7 21:33:34.322: INFO: stderr: "I0607 21:33:34.233731 1947 log.go:172] (0xc000a046e0) (0xc0007db4a0) Create stream\nI0607 21:33:34.233789 1947 log.go:172] (0xc000a046e0) (0xc0007db4a0) Stream added, broadcasting: 1\nI0607 21:33:34.235650 1947 log.go:172] (0xc000a046e0) Reply frame received for 1\nI0607 21:33:34.235697 1947 log.go:172] (0xc000a046e0) (0xc0009c4000) Create stream\nI0607 21:33:34.235712 1947 log.go:172] (0xc000a046e0) (0xc0009c4000) Stream added, broadcasting: 3\nI0607 21:33:34.236478 1947 log.go:172] (0xc000a046e0) Reply frame received for 3\nI0607 21:33:34.236518 1947 log.go:172] (0xc000a046e0) (0xc000992000) Create stream\nI0607 21:33:34.236531 1947 log.go:172] (0xc000a046e0) (0xc000992000) Stream added, broadcasting: 5\nI0607 21:33:34.237477 1947 log.go:172] (0xc000a046e0) Reply frame received for 5\nI0607 21:33:34.315435 1947 log.go:172] (0xc000a046e0) Data frame 
received for 5\nI0607 21:33:34.315489 1947 log.go:172] (0xc000992000) (5) Data frame handling\nI0607 21:33:34.315506 1947 log.go:172] (0xc000992000) (5) Data frame sent\nI0607 21:33:34.315518 1947 log.go:172] (0xc000a046e0) Data frame received for 5\nI0607 21:33:34.315528 1947 log.go:172] (0xc000992000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0607 21:33:34.315581 1947 log.go:172] (0xc000a046e0) Data frame received for 3\nI0607 21:33:34.315611 1947 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0607 21:33:34.315630 1947 log.go:172] (0xc0009c4000) (3) Data frame sent\nI0607 21:33:34.315641 1947 log.go:172] (0xc000a046e0) Data frame received for 3\nI0607 21:33:34.315665 1947 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0607 21:33:34.317444 1947 log.go:172] (0xc000a046e0) Data frame received for 1\nI0607 21:33:34.317474 1947 log.go:172] (0xc0007db4a0) (1) Data frame handling\nI0607 21:33:34.317493 1947 log.go:172] (0xc0007db4a0) (1) Data frame sent\nI0607 21:33:34.317605 1947 log.go:172] (0xc000a046e0) (0xc0007db4a0) Stream removed, broadcasting: 1\nI0607 21:33:34.317826 1947 log.go:172] (0xc000a046e0) Go away received\nI0607 21:33:34.318022 1947 log.go:172] (0xc000a046e0) (0xc0007db4a0) Stream removed, broadcasting: 1\nI0607 21:33:34.318046 1947 log.go:172] (0xc000a046e0) (0xc0009c4000) Stream removed, broadcasting: 3\nI0607 21:33:34.318069 1947 log.go:172] (0xc000a046e0) (0xc000992000) Stream removed, broadcasting: 5\n" Jun 7 21:33:34.322: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 7 21:33:34.322: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 7 21:33:34.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6535 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 7 21:33:34.544: INFO: stderr: "I0607 
21:33:34.451417 1968 log.go:172] (0xc0000f5600) (0xc00066fae0) Create stream\nI0607 21:33:34.451476 1968 log.go:172] (0xc0000f5600) (0xc00066fae0) Stream added, broadcasting: 1\nI0607 21:33:34.454560 1968 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0607 21:33:34.454613 1968 log.go:172] (0xc0000f5600) (0xc000960000) Create stream\nI0607 21:33:34.454625 1968 log.go:172] (0xc0000f5600) (0xc000960000) Stream added, broadcasting: 3\nI0607 21:33:34.455784 1968 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0607 21:33:34.455822 1968 log.go:172] (0xc0000f5600) (0xc00066fcc0) Create stream\nI0607 21:33:34.455836 1968 log.go:172] (0xc0000f5600) (0xc00066fcc0) Stream added, broadcasting: 5\nI0607 21:33:34.458507 1968 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0607 21:33:34.536003 1968 log.go:172] (0xc0000f5600) Data frame received for 5\nI0607 21:33:34.536047 1968 log.go:172] (0xc00066fcc0) (5) Data frame handling\nI0607 21:33:34.536073 1968 log.go:172] (0xc00066fcc0) (5) Data frame sent\nI0607 21:33:34.536099 1968 log.go:172] (0xc0000f5600) Data frame received for 5\nI0607 21:33:34.536109 1968 log.go:172] (0xc00066fcc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0607 21:33:34.536156 1968 log.go:172] (0xc0000f5600) Data frame received for 3\nI0607 21:33:34.536201 1968 log.go:172] (0xc000960000) (3) Data frame handling\nI0607 21:33:34.536258 1968 log.go:172] (0xc000960000) (3) Data frame sent\nI0607 21:33:34.536287 1968 log.go:172] (0xc0000f5600) Data frame received for 3\nI0607 21:33:34.536307 1968 log.go:172] (0xc000960000) (3) Data frame handling\nI0607 21:33:34.537799 1968 log.go:172] (0xc0000f5600) Data frame received for 1\nI0607 21:33:34.537833 1968 log.go:172] (0xc00066fae0) (1) Data frame handling\nI0607 21:33:34.537858 1968 log.go:172] (0xc00066fae0) (1) Data frame sent\nI0607 21:33:34.537883 1968 log.go:172] (0xc0000f5600) (0xc00066fae0) Stream removed, broadcasting: 1\nI0607 21:33:34.537903 1968 
log.go:172] (0xc0000f5600) Go away received\nI0607 21:33:34.538292 1968 log.go:172] (0xc0000f5600) (0xc00066fae0) Stream removed, broadcasting: 1\nI0607 21:33:34.538308 1968 log.go:172] (0xc0000f5600) (0xc000960000) Stream removed, broadcasting: 3\nI0607 21:33:34.538316 1968 log.go:172] (0xc0000f5600) (0xc00066fcc0) Stream removed, broadcasting: 5\n" Jun 7 21:33:34.544: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 7 21:33:34.544: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 7 21:33:34.544: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 7 21:34:14.608: INFO: Deleting all statefulset in ns statefulset-6535 Jun 7 21:34:14.612: INFO: Scaling statefulset ss to 0 Jun 7 21:34:14.620: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 21:34:14.622: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:34:14.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6535" for this suite. 
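The "halt with unhealthy stateful pod" phases above work by moving index.html out of httpd's docroot (so the readiness probe fails and the pod goes Ready=false) and later moving it back. A minimal local simulation of that toggle, with temp directories standing in for the container paths; this is an illustrative sketch, not the e2e framework's code:

```python
import pathlib
import shutil
import tempfile

# Hypothetical stand-ins for the container paths the test manipulates.
docroot = pathlib.Path(tempfile.mkdtemp())  # plays /usr/local/apache2/htdocs
stash = pathlib.Path(tempfile.mkdtemp())    # plays /tmp
(docroot / "index.html").write_text("ok")

def probe_ready() -> bool:
    # httpd's readiness probe succeeds only while index.html is servable.
    return (docroot / "index.html").exists()

# Break readiness; the test's `|| true` suffix keeps the kubectl exec
# exit code 0 even if the file was already moved on an earlier retry.
shutil.move(str(docroot / "index.html"), str(stash / "index.html"))
assert not probe_ready()  # pod would report Ready=false; scaling halts

shutil.move(str(stash / "index.html"), str(docroot / "index.html"))
assert probe_ready()      # pod would report Ready=true; scaling resumes
```

While any pod is unready, the StatefulSet controller refuses to create or delete further replicas, which is what the repeated "Verifying statefulset ss doesn't scale past N" lines are polling for.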
• [SLOW TEST:102.938 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":112,"skipped":1683,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:34:14.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Jun 7 21:34:14.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5549' 
Jun 7 21:34:15.029: INFO: stderr: "" Jun 7 21:34:15.029: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 7 21:34:15.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5549' Jun 7 21:34:15.166: INFO: stderr: "" Jun 7 21:34:15.166: INFO: stdout: "update-demo-nautilus-2m6f2 update-demo-nautilus-8qrgf " Jun 7 21:34:15.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2m6f2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:15.295: INFO: stderr: "" Jun 7 21:34:15.295: INFO: stdout: "" Jun 7 21:34:15.295: INFO: update-demo-nautilus-2m6f2 is created but not running Jun 7 21:34:20.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5549' Jun 7 21:34:20.401: INFO: stderr: "" Jun 7 21:34:20.401: INFO: stdout: "update-demo-nautilus-2m6f2 update-demo-nautilus-8qrgf " Jun 7 21:34:20.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2m6f2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:20.511: INFO: stderr: "" Jun 7 21:34:20.511: INFO: stdout: "true" Jun 7 21:34:20.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2m6f2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:20.601: INFO: stderr: "" Jun 7 21:34:20.601: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 21:34:20.601: INFO: validating pod update-demo-nautilus-2m6f2 Jun 7 21:34:20.606: INFO: got data: { "image": "nautilus.jpg" } Jun 7 21:34:20.606: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 7 21:34:20.606: INFO: update-demo-nautilus-2m6f2 is verified up and running Jun 7 21:34:20.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qrgf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:20.706: INFO: stderr: "" Jun 7 21:34:20.706: INFO: stdout: "true" Jun 7 21:34:20.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qrgf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:20.804: INFO: stderr: "" Jun 7 21:34:20.804: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 21:34:20.804: INFO: validating pod update-demo-nautilus-8qrgf Jun 7 21:34:20.809: INFO: got data: { "image": "nautilus.jpg" } Jun 7 21:34:20.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
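The repeated `kubectl get pods -o template` calls above evaluate a Go template that prints "true" only when a container named update-demo exists and is in the running state. The same predicate, sketched in Python over a pod object (hypothetical minimal manifests; not kubectl's template engine):

```python
def container_running(pod: dict, name: str = "update-demo") -> bool:
    # Mirrors the template: {{if (exists . "status" "containerStatuses")}}
    # {{range ...}}{{if (and (eq .name "update-demo")
    #                        (exists . "state" "running"))}}true{{end}}...
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

# No containerStatuses yet -> empty stdout, "created but not running" above.
pending = {"status": {}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}

assert not container_running(pending)
assert container_running(running)
```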
Jun 7 21:34:20.809: INFO: update-demo-nautilus-8qrgf is verified up and running STEP: rolling-update to new replication controller Jun 7 21:34:20.811: INFO: scanned /root for discovery docs: Jun 7 21:34:20.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5549' Jun 7 21:34:43.470: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 7 21:34:43.471: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 7 21:34:43.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5549' Jun 7 21:34:43.565: INFO: stderr: "" Jun 7 21:34:43.565: INFO: stdout: "update-demo-kitten-46ns2 update-demo-kitten-vx5zz " Jun 7 21:34:43.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-46ns2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:43.660: INFO: stderr: "" Jun 7 21:34:43.660: INFO: stdout: "true" Jun 7 21:34:43.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-46ns2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:43.754: INFO: stderr: "" Jun 7 21:34:43.754: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 7 21:34:43.754: INFO: validating pod update-demo-kitten-46ns2 Jun 7 21:34:43.766: INFO: got data: { "image": "kitten.jpg" } Jun 7 21:34:43.766: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 7 21:34:43.766: INFO: update-demo-kitten-46ns2 is verified up and running Jun 7 21:34:43.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vx5zz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:43.860: INFO: stderr: "" Jun 7 21:34:43.860: INFO: stdout: "true" Jun 7 21:34:43.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vx5zz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5549' Jun 7 21:34:43.954: INFO: stderr: "" Jun 7 21:34:43.954: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 7 21:34:43.954: INFO: validating pod update-demo-kitten-vx5zz Jun 7 21:34:43.964: INFO: got data: { "image": "kitten.jpg" } Jun 7 21:34:43.964: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 7 21:34:43.964: INFO: update-demo-kitten-vx5zz is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:34:43.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5549" for this suite. 
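The deprecated `kubectl rolling-update` transcript above interleaves one-replica scale steps while keeping at least 2 pods available and never exceeding 3 total. A rough sketch of that interleaving (illustrative only, not kubectl's implementation; names are placeholders):

```python
def rolling_update_steps(old: int, new_target: int,
                         min_available: int, max_total: int) -> list:
    # Scale the new controller up whenever total headroom allows,
    # otherwise scale the old one down while availability permits.
    new, steps = 0, []
    while new < new_target or old > 0:
        if new < new_target and new + old < max_total:
            new += 1
            steps.append(f"Scaling new up to {new}")
        elif old > 0 and new + old > min_available:
            old -= 1
            steps.append(f"Scaling old down to {old}")
        else:
            break  # constraints cannot both be satisfied
    return steps

# Reproduces the sequence logged above: kitten up to 1, nautilus down
# to 1, kitten up to 2, nautilus down to 0.
print(rolling_update_steps(old=2, new_target=2, min_available=2, max_total=3))
```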
• [SLOW TEST:29.329 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":113,"skipped":1685,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:34:43.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 7 21:34:44.041: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 7 21:34:49.086: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:34:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5123" for this suite. 
• [SLOW TEST:5.478 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":114,"skipped":1687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:34:49.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 7 21:34:49.764: INFO: Waiting up to 5m0s for pod "pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed" in namespace "emptydir-3576" to be "success or failure" Jun 7 21:34:49.810: INFO: Pod "pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed": Phase="Pending", Reason="", readiness=false. Elapsed: 46.537017ms Jun 7 21:34:51.930: INFO: Pod "pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16631192s Jun 7 21:34:53.935: INFO: Pod "pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.170846442s Jun 7 21:34:55.944: INFO: Pod "pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.179636487s STEP: Saw pod success Jun 7 21:34:55.944: INFO: Pod "pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed" satisfied condition "success or failure" Jun 7 21:34:55.946: INFO: Trying to get logs from node jerma-worker2 pod pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed container test-container: STEP: delete the pod Jun 7 21:34:55.970: INFO: Waiting for pod pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed to disappear Jun 7 21:34:55.979: INFO: Pod pod-8b9f7a77-ad13-4713-81cd-8b9ada0a74ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:34:55.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3576" for this suite. • [SLOW TEST:6.541 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:34:55.992: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 7 21:34:56.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4897' Jun 7 21:34:56.189: INFO: stderr: "" Jun 7 21:34:56.189: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Jun 7 21:34:56.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4897' Jun 7 21:34:59.292: INFO: stderr: "" Jun 7 21:34:59.292: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:34:59.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4897" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":116,"skipped":1797,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:34:59.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 7 21:34:59.373: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5164 /api/v1/namespaces/watch-5164/configmaps/e2e-watch-test-label-changed 9bfbb24e-e3af-4b0e-a3c4-f5f83287f0ed 22530932 0 2020-06-07 21:34:59 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 21:34:59.374: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5164 /api/v1/namespaces/watch-5164/configmaps/e2e-watch-test-label-changed 9bfbb24e-e3af-4b0e-a3c4-f5f83287f0ed 22530933 0 2020-06-07 21:34:59 +0000 UTC map[watch-this-configmap:label-changed-and-restored] 
map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 7 21:34:59.374: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5164 /api/v1/namespaces/watch-5164/configmaps/e2e-watch-test-label-changed 9bfbb24e-e3af-4b0e-a3c4-f5f83287f0ed 22530934 0 2020-06-07 21:34:59 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 7 21:35:09.421: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5164 /api/v1/namespaces/watch-5164/configmaps/e2e-watch-test-label-changed 9bfbb24e-e3af-4b0e-a3c4-f5f83287f0ed 22530982 0 2020-06-07 21:34:59 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 7 21:35:09.422: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5164 /api/v1/namespaces/watch-5164/configmaps/e2e-watch-test-label-changed 9bfbb24e-e3af-4b0e-a3c4-f5f83287f0ed 22530983 0 2020-06-07 21:34:59 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 7 21:35:09.422: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5164 /api/v1/namespaces/watch-5164/configmaps/e2e-watch-test-label-changed 9bfbb24e-e3af-4b0e-a3c4-f5f83287f0ed 22530984 0 2020-06-07 21:34:59 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:35:09.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5164" for this suite. • [SLOW TEST:10.130 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":117,"skipped":1856,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:35:09.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3294 [It] should perform rolling updates and roll backs of template 
modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jun 7 21:35:09.486: INFO: Found 0 stateful pods, waiting for 3 Jun 7 21:35:19.491: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:35:19.492: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:35:19.492: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 7 21:35:29.491: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:35:29.491: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:35:29.491: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:35:29.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3294 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 7 21:35:29.816: INFO: stderr: "I0607 21:35:29.642858 2323 log.go:172] (0xc0008f4840) (0xc000b96000) Create stream\nI0607 21:35:29.642914 2323 log.go:172] (0xc0008f4840) (0xc000b96000) Stream added, broadcasting: 1\nI0607 21:35:29.645718 2323 log.go:172] (0xc0008f4840) Reply frame received for 1\nI0607 21:35:29.645785 2323 log.go:172] (0xc0008f4840) (0xc0006d3b80) Create stream\nI0607 21:35:29.645823 2323 log.go:172] (0xc0008f4840) (0xc0006d3b80) Stream added, broadcasting: 3\nI0607 21:35:29.647195 2323 log.go:172] (0xc0008f4840) Reply frame received for 3\nI0607 21:35:29.647242 2323 log.go:172] (0xc0008f4840) (0xc0002ae000) Create stream\nI0607 21:35:29.647256 2323 log.go:172] (0xc0008f4840) (0xc0002ae000) Stream added, broadcasting: 5\nI0607 21:35:29.648429 2323 log.go:172] (0xc0008f4840) Reply frame received for 5\nI0607 21:35:29.740552 2323 log.go:172] (0xc0008f4840) Data frame received for 
5\nI0607 21:35:29.740583 2323 log.go:172] (0xc0002ae000) (5) Data frame handling\nI0607 21:35:29.740609 2323 log.go:172] (0xc0002ae000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:35:29.806006 2323 log.go:172] (0xc0008f4840) Data frame received for 5\nI0607 21:35:29.806049 2323 log.go:172] (0xc0002ae000) (5) Data frame handling\nI0607 21:35:29.806075 2323 log.go:172] (0xc0008f4840) Data frame received for 3\nI0607 21:35:29.806087 2323 log.go:172] (0xc0006d3b80) (3) Data frame handling\nI0607 21:35:29.806100 2323 log.go:172] (0xc0006d3b80) (3) Data frame sent\nI0607 21:35:29.806116 2323 log.go:172] (0xc0008f4840) Data frame received for 3\nI0607 21:35:29.806131 2323 log.go:172] (0xc0006d3b80) (3) Data frame handling\nI0607 21:35:29.807549 2323 log.go:172] (0xc0008f4840) Data frame received for 1\nI0607 21:35:29.807637 2323 log.go:172] (0xc000b96000) (1) Data frame handling\nI0607 21:35:29.807712 2323 log.go:172] (0xc000b96000) (1) Data frame sent\nI0607 21:35:29.807741 2323 log.go:172] (0xc0008f4840) (0xc000b96000) Stream removed, broadcasting: 1\nI0607 21:35:29.807761 2323 log.go:172] (0xc0008f4840) Go away received\nI0607 21:35:29.808248 2323 log.go:172] (0xc0008f4840) (0xc000b96000) Stream removed, broadcasting: 1\nI0607 21:35:29.808288 2323 log.go:172] (0xc0008f4840) (0xc0006d3b80) Stream removed, broadcasting: 3\nI0607 21:35:29.808314 2323 log.go:172] (0xc0008f4840) (0xc0002ae000) Stream removed, broadcasting: 5\n" Jun 7 21:35:29.816: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:35:29.816: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 7 21:35:39.924: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal 
order Jun 7 21:35:49.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3294 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 7 21:35:50.207: INFO: stderr: "I0607 21:35:50.098979 2345 log.go:172] (0xc000586dc0) (0xc0006e5b80) Create stream\nI0607 21:35:50.099039 2345 log.go:172] (0xc000586dc0) (0xc0006e5b80) Stream added, broadcasting: 1\nI0607 21:35:50.101770 2345 log.go:172] (0xc000586dc0) Reply frame received for 1\nI0607 21:35:50.101824 2345 log.go:172] (0xc000586dc0) (0xc0009ec000) Create stream\nI0607 21:35:50.101840 2345 log.go:172] (0xc000586dc0) (0xc0009ec000) Stream added, broadcasting: 3\nI0607 21:35:50.102981 2345 log.go:172] (0xc000586dc0) Reply frame received for 3\nI0607 21:35:50.103026 2345 log.go:172] (0xc000586dc0) (0xc0006e5d60) Create stream\nI0607 21:35:50.103040 2345 log.go:172] (0xc000586dc0) (0xc0006e5d60) Stream added, broadcasting: 5\nI0607 21:35:50.103907 2345 log.go:172] (0xc000586dc0) Reply frame received for 5\nI0607 21:35:50.199097 2345 log.go:172] (0xc000586dc0) Data frame received for 5\nI0607 21:35:50.199136 2345 log.go:172] (0xc0006e5d60) (5) Data frame handling\nI0607 21:35:50.199149 2345 log.go:172] (0xc0006e5d60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0607 21:35:50.199162 2345 log.go:172] (0xc000586dc0) Data frame received for 3\nI0607 21:35:50.199229 2345 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0607 21:35:50.199267 2345 log.go:172] (0xc0009ec000) (3) Data frame sent\nI0607 21:35:50.199290 2345 log.go:172] (0xc000586dc0) Data frame received for 3\nI0607 21:35:50.199308 2345 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0607 21:35:50.199331 2345 log.go:172] (0xc000586dc0) Data frame received for 5\nI0607 21:35:50.199350 2345 log.go:172] (0xc0006e5d60) (5) Data frame handling\nI0607 21:35:50.200991 2345 log.go:172] (0xc000586dc0) Data frame received for 1\nI0607 21:35:50.201028 2345 
log.go:172] (0xc0006e5b80) (1) Data frame handling\nI0607 21:35:50.201060 2345 log.go:172] (0xc0006e5b80) (1) Data frame sent\nI0607 21:35:50.201089 2345 log.go:172] (0xc000586dc0) (0xc0006e5b80) Stream removed, broadcasting: 1\nI0607 21:35:50.201317 2345 log.go:172] (0xc000586dc0) Go away received\nI0607 21:35:50.201712 2345 log.go:172] (0xc000586dc0) (0xc0006e5b80) Stream removed, broadcasting: 1\nI0607 21:35:50.201745 2345 log.go:172] (0xc000586dc0) (0xc0009ec000) Stream removed, broadcasting: 3\nI0607 21:35:50.201766 2345 log.go:172] (0xc000586dc0) (0xc0006e5d60) Stream removed, broadcasting: 5\n" Jun 7 21:35:50.207: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 7 21:35:50.207: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 7 21:36:10.239: INFO: Waiting for StatefulSet statefulset-3294/ss2 to complete update STEP: Rolling back to a previous revision Jun 7 21:36:20.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3294 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 7 21:36:20.513: INFO: stderr: "I0607 21:36:20.375260 2365 log.go:172] (0xc000af80b0) (0xc000649e00) Create stream\nI0607 21:36:20.375330 2365 log.go:172] (0xc000af80b0) (0xc000649e00) Stream added, broadcasting: 1\nI0607 21:36:20.377841 2365 log.go:172] (0xc000af80b0) Reply frame received for 1\nI0607 21:36:20.377872 2365 log.go:172] (0xc000af80b0) (0xc00054e780) Create stream\nI0607 21:36:20.377881 2365 log.go:172] (0xc000af80b0) (0xc00054e780) Stream added, broadcasting: 3\nI0607 21:36:20.378668 2365 log.go:172] (0xc000af80b0) Reply frame received for 3\nI0607 21:36:20.378699 2365 log.go:172] (0xc000af80b0) (0xc000649ea0) Create stream\nI0607 21:36:20.378709 2365 log.go:172] (0xc000af80b0) (0xc000649ea0) Stream added, broadcasting: 5\nI0607 21:36:20.379517 2365 log.go:172] 
(0xc000af80b0) Reply frame received for 5\nI0607 21:36:20.469098 2365 log.go:172] (0xc000af80b0) Data frame received for 5\nI0607 21:36:20.469458 2365 log.go:172] (0xc000649ea0) (5) Data frame handling\nI0607 21:36:20.469534 2365 log.go:172] (0xc000649ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:36:20.507551 2365 log.go:172] (0xc000af80b0) Data frame received for 5\nI0607 21:36:20.507582 2365 log.go:172] (0xc000649ea0) (5) Data frame handling\nI0607 21:36:20.507607 2365 log.go:172] (0xc000af80b0) Data frame received for 3\nI0607 21:36:20.507615 2365 log.go:172] (0xc00054e780) (3) Data frame handling\nI0607 21:36:20.507621 2365 log.go:172] (0xc00054e780) (3) Data frame sent\nI0607 21:36:20.507689 2365 log.go:172] (0xc000af80b0) Data frame received for 3\nI0607 21:36:20.507702 2365 log.go:172] (0xc00054e780) (3) Data frame handling\nI0607 21:36:20.508770 2365 log.go:172] (0xc000af80b0) Data frame received for 1\nI0607 21:36:20.508786 2365 log.go:172] (0xc000649e00) (1) Data frame handling\nI0607 21:36:20.508799 2365 log.go:172] (0xc000649e00) (1) Data frame sent\nI0607 21:36:20.508817 2365 log.go:172] (0xc000af80b0) (0xc000649e00) Stream removed, broadcasting: 1\nI0607 21:36:20.508832 2365 log.go:172] (0xc000af80b0) Go away received\nI0607 21:36:20.509272 2365 log.go:172] (0xc000af80b0) (0xc000649e00) Stream removed, broadcasting: 1\nI0607 21:36:20.509284 2365 log.go:172] (0xc000af80b0) (0xc00054e780) Stream removed, broadcasting: 3\nI0607 21:36:20.509289 2365 log.go:172] (0xc000af80b0) (0xc000649ea0) Stream removed, broadcasting: 5\n" Jun 7 21:36:20.513: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:36:20.513: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 7 21:36:30.543: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 7 21:36:40.600: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3294 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 7 21:36:40.825: INFO: stderr: "I0607 21:36:40.728855 2388 log.go:172] (0xc000716000) (0xc0007fe000) Create stream\nI0607 21:36:40.728914 2388 log.go:172] (0xc000716000) (0xc0007fe000) Stream added, broadcasting: 1\nI0607 21:36:40.730882 2388 log.go:172] (0xc000716000) Reply frame received for 1\nI0607 21:36:40.730926 2388 log.go:172] (0xc000716000) (0xc000609ea0) Create stream\nI0607 21:36:40.730940 2388 log.go:172] (0xc000716000) (0xc000609ea0) Stream added, broadcasting: 3\nI0607 21:36:40.731848 2388 log.go:172] (0xc000716000) Reply frame received for 3\nI0607 21:36:40.731895 2388 log.go:172] (0xc000716000) (0xc000609f40) Create stream\nI0607 21:36:40.731908 2388 log.go:172] (0xc000716000) (0xc000609f40) Stream added, broadcasting: 5\nI0607 21:36:40.732842 2388 log.go:172] (0xc000716000) Reply frame received for 5\nI0607 21:36:40.818087 2388 log.go:172] (0xc000716000) Data frame received for 3\nI0607 21:36:40.818124 2388 log.go:172] (0xc000716000) Data frame received for 5\nI0607 21:36:40.818147 2388 log.go:172] (0xc000609f40) (5) Data frame handling\nI0607 21:36:40.818169 2388 log.go:172] (0xc000609f40) (5) Data frame sent\nI0607 21:36:40.818189 2388 log.go:172] (0xc000716000) Data frame received for 5\nI0607 21:36:40.818199 2388 log.go:172] (0xc000609f40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0607 21:36:40.818212 2388 log.go:172] (0xc000609ea0) (3) Data frame handling\nI0607 21:36:40.818256 2388 log.go:172] (0xc000609ea0) (3) Data frame sent\nI0607 21:36:40.818270 2388 log.go:172] (0xc000716000) Data frame received for 3\nI0607 21:36:40.818277 2388 log.go:172] (0xc000609ea0) (3) Data frame handling\nI0607 21:36:40.819437 2388 log.go:172] (0xc000716000) Data frame received for 1\nI0607 21:36:40.819464 2388 log.go:172] (0xc0007fe000) (1) Data frame 
handling\nI0607 21:36:40.819479 2388 log.go:172] (0xc0007fe000) (1) Data frame sent\nI0607 21:36:40.819495 2388 log.go:172] (0xc000716000) (0xc0007fe000) Stream removed, broadcasting: 1\nI0607 21:36:40.819522 2388 log.go:172] (0xc000716000) Go away received\nI0607 21:36:40.819755 2388 log.go:172] (0xc000716000) (0xc0007fe000) Stream removed, broadcasting: 1\nI0607 21:36:40.819768 2388 log.go:172] (0xc000716000) (0xc000609ea0) Stream removed, broadcasting: 3\nI0607 21:36:40.819774 2388 log.go:172] (0xc000716000) (0xc000609f40) Stream removed, broadcasting: 5\n" Jun 7 21:36:40.825: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 7 21:36:40.825: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 7 21:36:51.327: INFO: Waiting for StatefulSet statefulset-3294/ss2 to complete update Jun 7 21:36:51.327: INFO: Waiting for Pod statefulset-3294/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 7 21:36:51.327: INFO: Waiting for Pod statefulset-3294/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 7 21:37:01.334: INFO: Waiting for StatefulSet statefulset-3294/ss2 to complete update Jun 7 21:37:01.334: INFO: Waiting for Pod statefulset-3294/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 7 21:37:11.334: INFO: Deleting all statefulset in ns statefulset-3294 Jun 7 21:37:11.337: INFO: Scaling statefulset ss2 to 0 Jun 7 21:37:31.365: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 21:37:31.369: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:37:31.402: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3294" for this suite. • [SLOW TEST:141.981 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":118,"skipped":1861,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:37:31.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6983 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: 
creating service externalsvc in namespace services-6983 STEP: creating replication controller externalsvc in namespace services-6983 I0607 21:37:31.711167 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6983, replica count: 2 I0607 21:37:34.761576 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 21:37:37.761838 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 7 21:37:37.807: INFO: Creating new exec pod Jun 7 21:37:41.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6983 execpod8w5r5 -- /bin/sh -x -c nslookup clusterip-service' Jun 7 21:37:42.170: INFO: stderr: "I0607 21:37:41.993731 2408 log.go:172] (0xc0000f42c0) (0xc000bbe000) Create stream\nI0607 21:37:41.993786 2408 log.go:172] (0xc0000f42c0) (0xc000bbe000) Stream added, broadcasting: 1\nI0607 21:37:41.996021 2408 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0607 21:37:41.996069 2408 log.go:172] (0xc0000f42c0) (0xc000bbe0a0) Create stream\nI0607 21:37:41.996084 2408 log.go:172] (0xc0000f42c0) (0xc000bbe0a0) Stream added, broadcasting: 3\nI0607 21:37:41.996827 2408 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0607 21:37:41.996862 2408 log.go:172] (0xc0000f42c0) (0xc0007735e0) Create stream\nI0607 21:37:41.996878 2408 log.go:172] (0xc0000f42c0) (0xc0007735e0) Stream added, broadcasting: 5\nI0607 21:37:41.998202 2408 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0607 21:37:42.074791 2408 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0607 21:37:42.074829 2408 log.go:172] (0xc0007735e0) (5) Data frame handling\nI0607 21:37:42.074848 2408 log.go:172] (0xc0007735e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0607 
21:37:42.159968 2408 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0607 21:37:42.160013 2408 log.go:172] (0xc000bbe0a0) (3) Data frame handling\nI0607 21:37:42.160047 2408 log.go:172] (0xc000bbe0a0) (3) Data frame sent\nI0607 21:37:42.161071 2408 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0607 21:37:42.161091 2408 log.go:172] (0xc000bbe0a0) (3) Data frame handling\nI0607 21:37:42.161282 2408 log.go:172] (0xc000bbe0a0) (3) Data frame sent\nI0607 21:37:42.162127 2408 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0607 21:37:42.162141 2408 log.go:172] (0xc000bbe0a0) (3) Data frame handling\nI0607 21:37:42.162360 2408 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0607 21:37:42.162373 2408 log.go:172] (0xc0007735e0) (5) Data frame handling\nI0607 21:37:42.164136 2408 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0607 21:37:42.164163 2408 log.go:172] (0xc000bbe000) (1) Data frame handling\nI0607 21:37:42.164187 2408 log.go:172] (0xc000bbe000) (1) Data frame sent\nI0607 21:37:42.164201 2408 log.go:172] (0xc0000f42c0) (0xc000bbe000) Stream removed, broadcasting: 1\nI0607 21:37:42.164224 2408 log.go:172] (0xc0000f42c0) Go away received\nI0607 21:37:42.164632 2408 log.go:172] (0xc0000f42c0) (0xc000bbe000) Stream removed, broadcasting: 1\nI0607 21:37:42.164655 2408 log.go:172] (0xc0000f42c0) (0xc000bbe0a0) Stream removed, broadcasting: 3\nI0607 21:37:42.164669 2408 log.go:172] (0xc0000f42c0) (0xc0007735e0) Stream removed, broadcasting: 5\n" Jun 7 21:37:42.171: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6983.svc.cluster.local\tcanonical name = externalsvc.services-6983.svc.cluster.local.\nName:\texternalsvc.services-6983.svc.cluster.local\nAddress: 10.105.25.66\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6983, will wait for the garbage collector to delete the pods Jun 7 21:37:42.231: INFO: Deleting ReplicationController externalsvc took: 7.019618ms 
Jun 7 21:37:42.331: INFO: Terminating ReplicationController externalsvc pods took: 100.230895ms Jun 7 21:37:49.565: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:37:49.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6983" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.222 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":119,"skipped":1889,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:37:49.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:37:49.675: INFO: Creating quota "condition-test" that 
allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 7 21:37:51.718: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:37:53.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7374" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":120,"skipped":1891,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:37:54.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 7 21:37:54.255: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 7 21:37:54.269: INFO: Waiting for terminating namespaces to be deleted... 
Jun 7 21:37:54.272: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jun 7 21:37:54.291: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 7 21:37:54.291: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 21:37:54.291: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 7 21:37:54.291: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 21:37:54.291: INFO: condition-test-g8r68 from replication-controller-7374 started at 2020-06-07 21:37:50 +0000 UTC (1 container status recorded) Jun 7 21:37:54.291: INFO: Container httpd ready: false, restart count 0 Jun 7 21:37:54.291: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jun 7 21:37:54.310: INFO: execpod8w5r5 from services-6983 started at 2020-06-07 21:37:37 +0000 UTC (1 container status recorded) Jun 7 21:37:54.310: INFO: Container agnhost-pause ready: true, restart count 0 Jun 7 21:37:54.310: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 7 21:37:54.310: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 21:37:54.310: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Jun 7 21:37:54.310: INFO: Container kube-bench ready: false, restart count 0 Jun 7 21:37:54.310: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jun 7 21:37:54.310: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 21:37:54.310: INFO: condition-test-5bw9c from replication-controller-7374 started at 2020-06-07 21:37:50 +0000 UTC (1 container status recorded) Jun 7 21:37:54.310: INFO: Container httpd ready: false, restart count 0 Jun 7 21:37:54.310: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container
status recorded) Jun 7 21:37:54.310: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fdcd535e-8fef-44a1-b141-615865ec4308 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-fdcd535e-8fef-44a1-b141-615865ec4308 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-fdcd535e-8fef-44a1-b141-615865ec4308 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:38:04.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5837" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.541 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":121,"skipped":1897,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:38:04.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:38:04.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0822281d-0afd-4f4f-b6c9-3ab3e617563c" in namespace "downward-api-7254" to be "success or failure" Jun 7 21:38:04.761: INFO: Pod "downwardapi-volume-0822281d-0afd-4f4f-b6c9-3ab3e617563c": Phase="Pending", Reason="", 
readiness=false. Elapsed: 37.926973ms Jun 7 21:38:06.765: INFO: Pod "downwardapi-volume-0822281d-0afd-4f4f-b6c9-3ab3e617563c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042333606s Jun 7 21:38:08.770: INFO: Pod "downwardapi-volume-0822281d-0afd-4f4f-b6c9-3ab3e617563c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047280325s STEP: Saw pod success Jun 7 21:38:08.770: INFO: Pod "downwardapi-volume-0822281d-0afd-4f4f-b6c9-3ab3e617563c" satisfied condition "success or failure" Jun 7 21:38:08.774: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0822281d-0afd-4f4f-b6c9-3ab3e617563c container client-container: STEP: delete the pod Jun 7 21:38:08.800: INFO: Waiting for pod downwardapi-volume-0822281d-0afd-4f4f-b6c9-3ab3e617563c to disappear Jun 7 21:38:08.804: INFO: Pod downwardapi-volume-0822281d-0afd-4f4f-b6c9-3ab3e617563c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:38:08.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7254" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1901,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:38:08.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 7 21:38:20.990: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:20.990: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.017647 6 log.go:172] (0xc0026c1810) (0xc0015ddb80) Create stream I0607 21:38:21.017676 6 log.go:172] (0xc0026c1810) (0xc0015ddb80) Stream added, broadcasting: 1 I0607 21:38:21.019576 6 log.go:172] (0xc0026c1810) Reply frame received for 1 I0607 21:38:21.019631 6 log.go:172] (0xc0026c1810) (0xc002260140) Create stream I0607 21:38:21.019647 6 log.go:172] (0xc0026c1810) (0xc002260140) Stream added, broadcasting: 3 I0607 21:38:21.020667 6 log.go:172] (0xc0026c1810) Reply 
frame received for 3 I0607 21:38:21.020704 6 log.go:172] (0xc0026c1810) (0xc0015ddea0) Create stream I0607 21:38:21.020722 6 log.go:172] (0xc0026c1810) (0xc0015ddea0) Stream added, broadcasting: 5 I0607 21:38:21.022107 6 log.go:172] (0xc0026c1810) Reply frame received for 5 I0607 21:38:21.088345 6 log.go:172] (0xc0026c1810) Data frame received for 3 I0607 21:38:21.088394 6 log.go:172] (0xc002260140) (3) Data frame handling I0607 21:38:21.088419 6 log.go:172] (0xc002260140) (3) Data frame sent I0607 21:38:21.088449 6 log.go:172] (0xc0026c1810) Data frame received for 3 I0607 21:38:21.088484 6 log.go:172] (0xc002260140) (3) Data frame handling I0607 21:38:21.088508 6 log.go:172] (0xc0026c1810) Data frame received for 5 I0607 21:38:21.088530 6 log.go:172] (0xc0015ddea0) (5) Data frame handling I0607 21:38:21.090367 6 log.go:172] (0xc0026c1810) Data frame received for 1 I0607 21:38:21.090433 6 log.go:172] (0xc0015ddb80) (1) Data frame handling I0607 21:38:21.090535 6 log.go:172] (0xc0015ddb80) (1) Data frame sent I0607 21:38:21.090557 6 log.go:172] (0xc0026c1810) (0xc0015ddb80) Stream removed, broadcasting: 1 I0607 21:38:21.090589 6 log.go:172] (0xc0026c1810) Go away received I0607 21:38:21.090766 6 log.go:172] (0xc0026c1810) (0xc0015ddb80) Stream removed, broadcasting: 1 I0607 21:38:21.090791 6 log.go:172] (0xc0026c1810) (0xc002260140) Stream removed, broadcasting: 3 I0607 21:38:21.090806 6 log.go:172] (0xc0026c1810) (0xc0015ddea0) Stream removed, broadcasting: 5 Jun 7 21:38:21.090: INFO: Exec stderr: "" Jun 7 21:38:21.090: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.090: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.124642 6 log.go:172] (0xc000b2e370) (0xc0015c4f00) Create stream I0607 21:38:21.124700 6 log.go:172] (0xc000b2e370) (0xc0015c4f00) Stream added, broadcasting: 1 I0607 
21:38:21.127048 6 log.go:172] (0xc000b2e370) Reply frame received for 1 I0607 21:38:21.127096 6 log.go:172] (0xc000b2e370) (0xc00282cfa0) Create stream I0607 21:38:21.127113 6 log.go:172] (0xc000b2e370) (0xc00282cfa0) Stream added, broadcasting: 3 I0607 21:38:21.128251 6 log.go:172] (0xc000b2e370) Reply frame received for 3 I0607 21:38:21.128297 6 log.go:172] (0xc000b2e370) (0xc00282d0e0) Create stream I0607 21:38:21.128310 6 log.go:172] (0xc000b2e370) (0xc00282d0e0) Stream added, broadcasting: 5 I0607 21:38:21.129635 6 log.go:172] (0xc000b2e370) Reply frame received for 5 I0607 21:38:21.195744 6 log.go:172] (0xc000b2e370) Data frame received for 5 I0607 21:38:21.195772 6 log.go:172] (0xc000b2e370) Data frame received for 3 I0607 21:38:21.195796 6 log.go:172] (0xc00282cfa0) (3) Data frame handling I0607 21:38:21.195810 6 log.go:172] (0xc00282cfa0) (3) Data frame sent I0607 21:38:21.195822 6 log.go:172] (0xc00282d0e0) (5) Data frame handling I0607 21:38:21.195943 6 log.go:172] (0xc000b2e370) Data frame received for 3 I0607 21:38:21.195984 6 log.go:172] (0xc00282cfa0) (3) Data frame handling I0607 21:38:21.197879 6 log.go:172] (0xc000b2e370) Data frame received for 1 I0607 21:38:21.197904 6 log.go:172] (0xc0015c4f00) (1) Data frame handling I0607 21:38:21.197915 6 log.go:172] (0xc0015c4f00) (1) Data frame sent I0607 21:38:21.197948 6 log.go:172] (0xc000b2e370) (0xc0015c4f00) Stream removed, broadcasting: 1 I0607 21:38:21.198033 6 log.go:172] (0xc000b2e370) (0xc0015c4f00) Stream removed, broadcasting: 1 I0607 21:38:21.198046 6 log.go:172] (0xc000b2e370) (0xc00282cfa0) Stream removed, broadcasting: 3 I0607 21:38:21.198139 6 log.go:172] (0xc000b2e370) Go away received I0607 21:38:21.198260 6 log.go:172] (0xc000b2e370) (0xc00282d0e0) Stream removed, broadcasting: 5 Jun 7 21:38:21.198: INFO: Exec stderr: "" Jun 7 21:38:21.198: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-pod ContainerName:busybox-2 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.198: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.227414 6 log.go:172] (0xc00090a000) (0xc0027b4000) Create stream I0607 21:38:21.227444 6 log.go:172] (0xc00090a000) (0xc0027b4000) Stream added, broadcasting: 1 I0607 21:38:21.228997 6 log.go:172] (0xc00090a000) Reply frame received for 1 I0607 21:38:21.229033 6 log.go:172] (0xc00090a000) (0xc0013c0000) Create stream I0607 21:38:21.229041 6 log.go:172] (0xc00090a000) (0xc0013c0000) Stream added, broadcasting: 3 I0607 21:38:21.230118 6 log.go:172] (0xc00090a000) Reply frame received for 3 I0607 21:38:21.230289 6 log.go:172] (0xc00090a000) (0xc00282d4a0) Create stream I0607 21:38:21.230297 6 log.go:172] (0xc00090a000) (0xc00282d4a0) Stream added, broadcasting: 5 I0607 21:38:21.231234 6 log.go:172] (0xc00090a000) Reply frame received for 5 I0607 21:38:21.290766 6 log.go:172] (0xc00090a000) Data frame received for 5 I0607 21:38:21.290803 6 log.go:172] (0xc00282d4a0) (5) Data frame handling I0607 21:38:21.290835 6 log.go:172] (0xc00090a000) Data frame received for 3 I0607 21:38:21.290853 6 log.go:172] (0xc0013c0000) (3) Data frame handling I0607 21:38:21.290868 6 log.go:172] (0xc0013c0000) (3) Data frame sent I0607 21:38:21.290882 6 log.go:172] (0xc00090a000) Data frame received for 3 I0607 21:38:21.290900 6 log.go:172] (0xc0013c0000) (3) Data frame handling I0607 21:38:21.292286 6 log.go:172] (0xc00090a000) Data frame received for 1 I0607 21:38:21.292322 6 log.go:172] (0xc0027b4000) (1) Data frame handling I0607 21:38:21.292350 6 log.go:172] (0xc0027b4000) (1) Data frame sent I0607 21:38:21.292370 6 log.go:172] (0xc00090a000) (0xc0027b4000) Stream removed, broadcasting: 1 I0607 21:38:21.292486 6 log.go:172] (0xc00090a000) (0xc0027b4000) Stream removed, broadcasting: 1 I0607 21:38:21.292503 6 log.go:172] (0xc00090a000) (0xc0013c0000) Stream removed, broadcasting: 3 I0607 21:38:21.292670 6 log.go:172] (0xc00090a000) Go away received 
I0607 21:38:21.292734 6 log.go:172] (0xc00090a000) (0xc00282d4a0) Stream removed, broadcasting: 5 Jun 7 21:38:21.292: INFO: Exec stderr: "" Jun 7 21:38:21.292: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.292: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.326555 6 log.go:172] (0xc002ae20b0) (0xc0013c0640) Create stream I0607 21:38:21.326586 6 log.go:172] (0xc002ae20b0) (0xc0013c0640) Stream added, broadcasting: 1 I0607 21:38:21.328475 6 log.go:172] (0xc002ae20b0) Reply frame received for 1 I0607 21:38:21.328531 6 log.go:172] (0xc002ae20b0) (0xc0013c0960) Create stream I0607 21:38:21.328557 6 log.go:172] (0xc002ae20b0) (0xc0013c0960) Stream added, broadcasting: 3 I0607 21:38:21.330386 6 log.go:172] (0xc002ae20b0) Reply frame received for 3 I0607 21:38:21.330432 6 log.go:172] (0xc002ae20b0) (0xc002260320) Create stream I0607 21:38:21.330447 6 log.go:172] (0xc002ae20b0) (0xc002260320) Stream added, broadcasting: 5 I0607 21:38:21.331636 6 log.go:172] (0xc002ae20b0) Reply frame received for 5 I0607 21:38:21.385411 6 log.go:172] (0xc002ae20b0) Data frame received for 3 I0607 21:38:21.385435 6 log.go:172] (0xc0013c0960) (3) Data frame handling I0607 21:38:21.385457 6 log.go:172] (0xc0013c0960) (3) Data frame sent I0607 21:38:21.385578 6 log.go:172] (0xc002ae20b0) Data frame received for 5 I0607 21:38:21.385600 6 log.go:172] (0xc002260320) (5) Data frame handling I0607 21:38:21.385816 6 log.go:172] (0xc002ae20b0) Data frame received for 3 I0607 21:38:21.385838 6 log.go:172] (0xc0013c0960) (3) Data frame handling I0607 21:38:21.387894 6 log.go:172] (0xc002ae20b0) Data frame received for 1 I0607 21:38:21.387918 6 log.go:172] (0xc0013c0640) (1) Data frame handling I0607 21:38:21.387935 6 log.go:172] (0xc0013c0640) (1) Data frame sent I0607 21:38:21.388100 6 log.go:172] (0xc002ae20b0) 
(0xc0013c0640) Stream removed, broadcasting: 1 I0607 21:38:21.388159 6 log.go:172] (0xc002ae20b0) (0xc0013c0640) Stream removed, broadcasting: 1 I0607 21:38:21.388177 6 log.go:172] (0xc002ae20b0) (0xc0013c0960) Stream removed, broadcasting: 3 I0607 21:38:21.388191 6 log.go:172] (0xc002ae20b0) (0xc002260320) Stream removed, broadcasting: 5 Jun 7 21:38:21.388: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 7 21:38:21.388: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.388: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.390460 6 log.go:172] (0xc002ae20b0) Go away received I0607 21:38:21.423090 6 log.go:172] (0xc00152a0b0) (0xc0022605a0) Create stream I0607 21:38:21.423118 6 log.go:172] (0xc00152a0b0) (0xc0022605a0) Stream added, broadcasting: 1 I0607 21:38:21.424648 6 log.go:172] (0xc00152a0b0) Reply frame received for 1 I0607 21:38:21.424691 6 log.go:172] (0xc00152a0b0) (0xc0013c0d20) Create stream I0607 21:38:21.424707 6 log.go:172] (0xc00152a0b0) (0xc0013c0d20) Stream added, broadcasting: 3 I0607 21:38:21.425934 6 log.go:172] (0xc00152a0b0) Reply frame received for 3 I0607 21:38:21.425955 6 log.go:172] (0xc00152a0b0) (0xc0015c4fa0) Create stream I0607 21:38:21.425961 6 log.go:172] (0xc00152a0b0) (0xc0015c4fa0) Stream added, broadcasting: 5 I0607 21:38:21.427030 6 log.go:172] (0xc00152a0b0) Reply frame received for 5 I0607 21:38:21.505515 6 log.go:172] (0xc00152a0b0) Data frame received for 3 I0607 21:38:21.505545 6 log.go:172] (0xc0013c0d20) (3) Data frame handling I0607 21:38:21.505552 6 log.go:172] (0xc0013c0d20) (3) Data frame sent I0607 21:38:21.505556 6 log.go:172] (0xc00152a0b0) Data frame received for 3 I0607 21:38:21.505561 6 log.go:172] (0xc0013c0d20) (3) Data frame handling I0607 21:38:21.505629 6 
log.go:172] (0xc00152a0b0) Data frame received for 5 I0607 21:38:21.505657 6 log.go:172] (0xc0015c4fa0) (5) Data frame handling I0607 21:38:21.507071 6 log.go:172] (0xc00152a0b0) Data frame received for 1 I0607 21:38:21.507084 6 log.go:172] (0xc0022605a0) (1) Data frame handling I0607 21:38:21.507091 6 log.go:172] (0xc0022605a0) (1) Data frame sent I0607 21:38:21.507098 6 log.go:172] (0xc00152a0b0) (0xc0022605a0) Stream removed, broadcasting: 1 I0607 21:38:21.507156 6 log.go:172] (0xc00152a0b0) (0xc0022605a0) Stream removed, broadcasting: 1 I0607 21:38:21.507165 6 log.go:172] (0xc00152a0b0) (0xc0013c0d20) Stream removed, broadcasting: 3 I0607 21:38:21.507228 6 log.go:172] (0xc00152a0b0) Go away received I0607 21:38:21.507314 6 log.go:172] (0xc00152a0b0) (0xc0015c4fa0) Stream removed, broadcasting: 5 Jun 7 21:38:21.507: INFO: Exec stderr: "" Jun 7 21:38:21.507: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.507: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.541549 6 log.go:172] (0xc00152a6e0) (0xc0022608c0) Create stream I0607 21:38:21.541573 6 log.go:172] (0xc00152a6e0) (0xc0022608c0) Stream added, broadcasting: 1 I0607 21:38:21.544494 6 log.go:172] (0xc00152a6e0) Reply frame received for 1 I0607 21:38:21.544553 6 log.go:172] (0xc00152a6e0) (0xc002260960) Create stream I0607 21:38:21.544584 6 log.go:172] (0xc00152a6e0) (0xc002260960) Stream added, broadcasting: 3 I0607 21:38:21.545846 6 log.go:172] (0xc00152a6e0) Reply frame received for 3 I0607 21:38:21.545894 6 log.go:172] (0xc00152a6e0) (0xc002260aa0) Create stream I0607 21:38:21.545904 6 log.go:172] (0xc00152a6e0) (0xc002260aa0) Stream added, broadcasting: 5 I0607 21:38:21.546799 6 log.go:172] (0xc00152a6e0) Reply frame received for 5 I0607 21:38:21.598760 6 log.go:172] (0xc00152a6e0) Data frame received for 5 I0607 
21:38:21.598814 6 log.go:172] (0xc002260aa0) (5) Data frame handling I0607 21:38:21.598841 6 log.go:172] (0xc00152a6e0) Data frame received for 3 I0607 21:38:21.598858 6 log.go:172] (0xc002260960) (3) Data frame handling I0607 21:38:21.598878 6 log.go:172] (0xc002260960) (3) Data frame sent I0607 21:38:21.598896 6 log.go:172] (0xc00152a6e0) Data frame received for 3 I0607 21:38:21.598909 6 log.go:172] (0xc002260960) (3) Data frame handling I0607 21:38:21.600622 6 log.go:172] (0xc00152a6e0) Data frame received for 1 I0607 21:38:21.600649 6 log.go:172] (0xc0022608c0) (1) Data frame handling I0607 21:38:21.600681 6 log.go:172] (0xc0022608c0) (1) Data frame sent I0607 21:38:21.600764 6 log.go:172] (0xc00152a6e0) (0xc0022608c0) Stream removed, broadcasting: 1 I0607 21:38:21.600880 6 log.go:172] (0xc00152a6e0) (0xc0022608c0) Stream removed, broadcasting: 1 I0607 21:38:21.600918 6 log.go:172] (0xc00152a6e0) Go away received I0607 21:38:21.600962 6 log.go:172] (0xc00152a6e0) (0xc002260960) Stream removed, broadcasting: 3 I0607 21:38:21.601002 6 log.go:172] (0xc00152a6e0) (0xc002260aa0) Stream removed, broadcasting: 5 Jun 7 21:38:21.601: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 7 21:38:21.601: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.601: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.639908 6 log.go:172] (0xc002ae2b00) (0xc0013c1040) Create stream I0607 21:38:21.639934 6 log.go:172] (0xc002ae2b00) (0xc0013c1040) Stream added, broadcasting: 1 I0607 21:38:21.642163 6 log.go:172] (0xc002ae2b00) Reply frame received for 1 I0607 21:38:21.642218 6 log.go:172] (0xc002ae2b00) (0xc0015c5040) Create stream I0607 21:38:21.642241 6 log.go:172] (0xc002ae2b00) (0xc0015c5040) Stream added, broadcasting: 3 I0607 
21:38:21.643308 6 log.go:172] (0xc002ae2b00) Reply frame received for 3 I0607 21:38:21.643350 6 log.go:172] (0xc002ae2b00) (0xc002260b40) Create stream I0607 21:38:21.643364 6 log.go:172] (0xc002ae2b00) (0xc002260b40) Stream added, broadcasting: 5 I0607 21:38:21.644371 6 log.go:172] (0xc002ae2b00) Reply frame received for 5 I0607 21:38:21.718717 6 log.go:172] (0xc002ae2b00) Data frame received for 3 I0607 21:38:21.718758 6 log.go:172] (0xc0015c5040) (3) Data frame handling I0607 21:38:21.718781 6 log.go:172] (0xc0015c5040) (3) Data frame sent I0607 21:38:21.718810 6 log.go:172] (0xc002ae2b00) Data frame received for 3 I0607 21:38:21.718832 6 log.go:172] (0xc0015c5040) (3) Data frame handling I0607 21:38:21.718905 6 log.go:172] (0xc002ae2b00) Data frame received for 5 I0607 21:38:21.718940 6 log.go:172] (0xc002260b40) (5) Data frame handling I0607 21:38:21.720726 6 log.go:172] (0xc002ae2b00) Data frame received for 1 I0607 21:38:21.720747 6 log.go:172] (0xc0013c1040) (1) Data frame handling I0607 21:38:21.720759 6 log.go:172] (0xc0013c1040) (1) Data frame sent I0607 21:38:21.720783 6 log.go:172] (0xc002ae2b00) (0xc0013c1040) Stream removed, broadcasting: 1 I0607 21:38:21.720894 6 log.go:172] (0xc002ae2b00) (0xc0013c1040) Stream removed, broadcasting: 1 I0607 21:38:21.720911 6 log.go:172] (0xc002ae2b00) (0xc0015c5040) Stream removed, broadcasting: 3 I0607 21:38:21.720922 6 log.go:172] (0xc002ae2b00) (0xc002260b40) Stream removed, broadcasting: 5 Jun 7 21:38:21.720: INFO: Exec stderr: "" I0607 21:38:21.720949 6 log.go:172] (0xc002ae2b00) Go away received Jun 7 21:38:21.720: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.720: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.755303 6 log.go:172] (0xc002ae3130) (0xc0013c12c0) Create stream I0607 21:38:21.755342 6 log.go:172] 
(0xc002ae3130) (0xc0013c12c0) Stream added, broadcasting: 1 I0607 21:38:21.757070 6 log.go:172] (0xc002ae3130) Reply frame received for 1 I0607 21:38:21.757465 6 log.go:172] (0xc002ae3130) (0xc0013c1400) Create stream I0607 21:38:21.757485 6 log.go:172] (0xc002ae3130) (0xc0013c1400) Stream added, broadcasting: 3 I0607 21:38:21.758486 6 log.go:172] (0xc002ae3130) Reply frame received for 3 I0607 21:38:21.758524 6 log.go:172] (0xc002ae3130) (0xc002260c80) Create stream I0607 21:38:21.758539 6 log.go:172] (0xc002ae3130) (0xc002260c80) Stream added, broadcasting: 5 I0607 21:38:21.759534 6 log.go:172] (0xc002ae3130) Reply frame received for 5 I0607 21:38:21.823132 6 log.go:172] (0xc002ae3130) Data frame received for 5 I0607 21:38:21.823166 6 log.go:172] (0xc002260c80) (5) Data frame handling I0607 21:38:21.823186 6 log.go:172] (0xc002ae3130) Data frame received for 3 I0607 21:38:21.823322 6 log.go:172] (0xc0013c1400) (3) Data frame handling I0607 21:38:21.823336 6 log.go:172] (0xc0013c1400) (3) Data frame sent I0607 21:38:21.823357 6 log.go:172] (0xc002ae3130) Data frame received for 3 I0607 21:38:21.823374 6 log.go:172] (0xc0013c1400) (3) Data frame handling I0607 21:38:21.824874 6 log.go:172] (0xc002ae3130) Data frame received for 1 I0607 21:38:21.824908 6 log.go:172] (0xc0013c12c0) (1) Data frame handling I0607 21:38:21.824952 6 log.go:172] (0xc0013c12c0) (1) Data frame sent I0607 21:38:21.825017 6 log.go:172] (0xc002ae3130) (0xc0013c12c0) Stream removed, broadcasting: 1 I0607 21:38:21.825070 6 log.go:172] (0xc002ae3130) Go away received I0607 21:38:21.825346 6 log.go:172] (0xc002ae3130) (0xc0013c12c0) Stream removed, broadcasting: 1 I0607 21:38:21.825370 6 log.go:172] (0xc002ae3130) (0xc0013c1400) Stream removed, broadcasting: 3 I0607 21:38:21.825381 6 log.go:172] (0xc002ae3130) (0xc002260c80) Stream removed, broadcasting: 5 Jun 7 21:38:21.825: INFO: Exec stderr: "" Jun 7 21:38:21.825: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.825: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.857781 6 log.go:172] (0xc00152ad10) (0xc002261540) Create stream I0607 21:38:21.857805 6 log.go:172] (0xc00152ad10) (0xc002261540) Stream added, broadcasting: 1 I0607 21:38:21.859925 6 log.go:172] (0xc00152ad10) Reply frame received for 1 I0607 21:38:21.859986 6 log.go:172] (0xc00152ad10) (0xc0027b40a0) Create stream I0607 21:38:21.860006 6 log.go:172] (0xc00152ad10) (0xc0027b40a0) Stream added, broadcasting: 3 I0607 21:38:21.860973 6 log.go:172] (0xc00152ad10) Reply frame received for 3 I0607 21:38:21.861021 6 log.go:172] (0xc00152ad10) (0xc0027b4140) Create stream I0607 21:38:21.861034 6 log.go:172] (0xc00152ad10) (0xc0027b4140) Stream added, broadcasting: 5 I0607 21:38:21.862190 6 log.go:172] (0xc00152ad10) Reply frame received for 5 I0607 21:38:21.909097 6 log.go:172] (0xc00152ad10) Data frame received for 3 I0607 21:38:21.909340 6 log.go:172] (0xc0027b40a0) (3) Data frame handling I0607 21:38:21.909359 6 log.go:172] (0xc0027b40a0) (3) Data frame sent I0607 21:38:21.909368 6 log.go:172] (0xc00152ad10) Data frame received for 3 I0607 21:38:21.909375 6 log.go:172] (0xc0027b40a0) (3) Data frame handling I0607 21:38:21.909413 6 log.go:172] (0xc00152ad10) Data frame received for 5 I0607 21:38:21.909432 6 log.go:172] (0xc0027b4140) (5) Data frame handling I0607 21:38:21.910763 6 log.go:172] (0xc00152ad10) Data frame received for 1 I0607 21:38:21.910796 6 log.go:172] (0xc002261540) (1) Data frame handling I0607 21:38:21.910824 6 log.go:172] (0xc002261540) (1) Data frame sent I0607 21:38:21.910974 6 log.go:172] (0xc00152ad10) (0xc002261540) Stream removed, broadcasting: 1 I0607 21:38:21.910997 6 log.go:172] (0xc00152ad10) Go away received I0607 21:38:21.911064 6 log.go:172] (0xc00152ad10) (0xc002261540) Stream removed, broadcasting: 1 
I0607 21:38:21.911092 6 log.go:172] (0xc00152ad10) (0xc0027b40a0) Stream removed, broadcasting: 3 I0607 21:38:21.911105 6 log.go:172] (0xc00152ad10) (0xc0027b4140) Stream removed, broadcasting: 5 Jun 7 21:38:21.911: INFO: Exec stderr: "" Jun 7 21:38:21.911: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1044 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:38:21.911: INFO: >>> kubeConfig: /root/.kube/config I0607 21:38:21.942975 6 log.go:172] (0xc002bd6840) (0xc00282d7c0) Create stream I0607 21:38:21.943009 6 log.go:172] (0xc002bd6840) (0xc00282d7c0) Stream added, broadcasting: 1 I0607 21:38:21.945312 6 log.go:172] (0xc002bd6840) Reply frame received for 1 I0607 21:38:21.945375 6 log.go:172] (0xc002bd6840) (0xc0027b4280) Create stream I0607 21:38:21.945398 6 log.go:172] (0xc002bd6840) (0xc0027b4280) Stream added, broadcasting: 3 I0607 21:38:21.946207 6 log.go:172] (0xc002bd6840) Reply frame received for 3 I0607 21:38:21.946237 6 log.go:172] (0xc002bd6840) (0xc00282dc20) Create stream I0607 21:38:21.946245 6 log.go:172] (0xc002bd6840) (0xc00282dc20) Stream added, broadcasting: 5 I0607 21:38:21.946974 6 log.go:172] (0xc002bd6840) Reply frame received for 5 I0607 21:38:22.020113 6 log.go:172] (0xc002bd6840) Data frame received for 3 I0607 21:38:22.020144 6 log.go:172] (0xc0027b4280) (3) Data frame handling I0607 21:38:22.020165 6 log.go:172] (0xc0027b4280) (3) Data frame sent I0607 21:38:22.020185 6 log.go:172] (0xc002bd6840) Data frame received for 3 I0607 21:38:22.020204 6 log.go:172] (0xc0027b4280) (3) Data frame handling I0607 21:38:22.020664 6 log.go:172] (0xc002bd6840) Data frame received for 5 I0607 21:38:22.020694 6 log.go:172] (0xc00282dc20) (5) Data frame handling I0607 21:38:22.022430 6 log.go:172] (0xc002bd6840) Data frame received for 1 I0607 21:38:22.022450 6 log.go:172] (0xc00282d7c0) (1) Data frame handling I0607 
21:38:22.022463 6 log.go:172] (0xc00282d7c0) (1) Data frame sent I0607 21:38:22.022473 6 log.go:172] (0xc002bd6840) (0xc00282d7c0) Stream removed, broadcasting: 1 I0607 21:38:22.022554 6 log.go:172] (0xc002bd6840) Go away received I0607 21:38:22.022603 6 log.go:172] (0xc002bd6840) (0xc00282d7c0) Stream removed, broadcasting: 1 I0607 21:38:22.022651 6 log.go:172] (0xc002bd6840) (0xc0027b4280) Stream removed, broadcasting: 3 I0607 21:38:22.022720 6 log.go:172] (0xc002bd6840) (0xc00282dc20) Stream removed, broadcasting: 5 Jun 7 21:38:22.022: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:38:22.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1044" for this suite. • [SLOW TEST:13.220 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1916,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:38:22.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:38:22.087: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d126261f-8f00-4fbe-809c-149165014f88" in namespace "security-context-test-9312" to be "success or failure" Jun 7 21:38:22.090: INFO: Pod "busybox-readonly-false-d126261f-8f00-4fbe-809c-149165014f88": Phase="Pending", Reason="", readiness=false. Elapsed: 3.093193ms Jun 7 21:38:24.095: INFO: Pod "busybox-readonly-false-d126261f-8f00-4fbe-809c-149165014f88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007197891s Jun 7 21:38:26.149: INFO: Pod "busybox-readonly-false-d126261f-8f00-4fbe-809c-149165014f88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06176076s Jun 7 21:38:26.149: INFO: Pod "busybox-readonly-false-d126261f-8f00-4fbe-809c-149165014f88" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:38:26.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9312" for this suite. 
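The Security Context test above creates a pod whose container explicitly sets `readOnlyRootFilesystem: false`, then waits for it to reach the "success or failure" condition (the log shows it going Pending → Succeeded in about 4s). A minimal manifest along the same lines — names and the write command are illustrative, not the exact spec the framework generates — might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  # illustrative name; the framework generates a UID-suffixed one
  # (e.g. busybox-readonly-false-<uid>)
  name: busybox-readonly-false
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: docker.io/library/busybox:1.29
    # the write succeeds only because the rootfs is not read-only
    command: ["/bin/sh", "-c", "echo ok > /tmp/probe && cat /tmp/probe"]
    securityContext:
      readOnlyRootFilesystem: false
```

With `readOnlyRootFilesystem: true` instead, the same write would fail and the pod would not reach `Succeeded`.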
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1917,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:38:26.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:38:26.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0635d1cc-cda2-4133-8aa2-f2dc7f3dfb04" in namespace "downward-api-9267" to be "success or failure" Jun 7 21:38:26.235: INFO: Pod "downwardapi-volume-0635d1cc-cda2-4133-8aa2-f2dc7f3dfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 9.68832ms Jun 7 21:38:28.279: INFO: Pod "downwardapi-volume-0635d1cc-cda2-4133-8aa2-f2dc7f3dfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053980061s Jun 7 21:38:30.283: INFO: Pod "downwardapi-volume-0635d1cc-cda2-4133-8aa2-f2dc7f3dfb04": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058077288s STEP: Saw pod success Jun 7 21:38:30.283: INFO: Pod "downwardapi-volume-0635d1cc-cda2-4133-8aa2-f2dc7f3dfb04" satisfied condition "success or failure" Jun 7 21:38:30.286: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0635d1cc-cda2-4133-8aa2-f2dc7f3dfb04 container client-container: STEP: delete the pod Jun 7 21:38:30.317: INFO: Waiting for pod downwardapi-volume-0635d1cc-cda2-4133-8aa2-f2dc7f3dfb04 to disappear Jun 7 21:38:30.320: INFO: Pod downwardapi-volume-0635d1cc-cda2-4133-8aa2-f2dc7f3dfb04 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:38:30.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9267" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1941,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:38:30.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 
21:38:30.418: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jun 7 21:38:30.453: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:30.496: INFO: Number of nodes with available pods: 0 Jun 7 21:38:30.496: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:38:31.502: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:31.504: INFO: Number of nodes with available pods: 0 Jun 7 21:38:31.504: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:38:32.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:32.503: INFO: Number of nodes with available pods: 0 Jun 7 21:38:32.503: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:38:33.557: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:33.561: INFO: Number of nodes with available pods: 0 Jun 7 21:38:33.561: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:38:34.502: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:34.506: INFO: Number of nodes with available pods: 1 Jun 7 21:38:34.506: INFO: Node jerma-worker2 is running more than one daemon pod Jun 7 21:38:35.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Jun 7 21:38:35.503: INFO: Number of nodes with available pods: 2 Jun 7 21:38:35.503: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 7 21:38:35.603: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:35.603: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:35.607: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:36.652: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:36.652: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:36.657: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:37.610: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:37.610: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:37.614: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:38.612: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 7 21:38:38.612: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:38.612: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:38.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:39.612: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:39.612: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:39.612: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:39.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:40.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:40.611: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:40.611: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:40.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:41.612: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:41.612: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 7 21:38:41.612: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:41.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:42.612: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:42.612: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:42.612: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:42.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:43.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:43.611: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:43.611: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:43.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:44.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:44.611: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 7 21:38:44.611: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:44.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:45.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:45.611: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:45.611: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:45.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:46.616: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:46.616: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:46.616: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:46.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:47.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:47.611: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 7 21:38:47.611: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:47.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:48.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:48.611: INFO: Wrong image for pod: daemon-set-dvhtj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:48.611: INFO: Pod daemon-set-dvhtj is not available Jun 7 21:38:48.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:49.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:49.611: INFO: Pod daemon-set-shq5v is not available Jun 7 21:38:49.614: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:50.635: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:50.635: INFO: Pod daemon-set-shq5v is not available Jun 7 21:38:50.639: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:51.612: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 7 21:38:51.612: INFO: Pod daemon-set-shq5v is not available Jun 7 21:38:51.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:52.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:52.611: INFO: Pod daemon-set-shq5v is not available Jun 7 21:38:52.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:53.665: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:53.670: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:54.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jun 7 21:38:54.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:55.611: INFO: Wrong image for pod: daemon-set-5nvnr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 7 21:38:55.611: INFO: Pod daemon-set-5nvnr is not available Jun 7 21:38:55.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:56.612: INFO: Pod daemon-set-nvsj5 is not available Jun 7 21:38:56.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jun 7 21:38:56.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:56.622: INFO: Number of nodes with available pods: 1 Jun 7 21:38:56.622: INFO: Node jerma-worker2 is running more than one daemon pod Jun 7 21:38:57.626: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:57.629: INFO: Number of nodes with available pods: 1 Jun 7 21:38:57.629: INFO: Node jerma-worker2 is running more than one daemon pod Jun 7 21:38:58.627: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:58.630: INFO: Number of nodes with available pods: 1 Jun 7 21:38:58.630: INFO: Node jerma-worker2 is running more than one daemon pod Jun 7 21:38:59.626: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:38:59.628: INFO: Number of nodes with available pods: 1 Jun 7 21:38:59.628: INFO: Node jerma-worker2 is running more than one daemon pod Jun 7 21:39:00.626: INFO: DaemonSet pods can't 
tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:39:00.629: INFO: Number of nodes with available pods: 2 Jun 7 21:39:00.629: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8575, will wait for the garbage collector to delete the pods Jun 7 21:39:00.698: INFO: Deleting DaemonSet.extensions daemon-set took: 5.98635ms Jun 7 21:39:00.998: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.264104ms Jun 7 21:39:09.506: INFO: Number of nodes with available pods: 0 Jun 7 21:39:09.506: INFO: Number of running nodes: 0, number of available pods: 0 Jun 7 21:39:09.508: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8575/daemonsets","resourceVersion":"22532449"},"items":null} Jun 7 21:39:09.511: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8575/pods","resourceVersion":"22532449"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:39:09.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8575" for this suite. 
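The DaemonSet test above creates a simple daemon set, updates its pod image, and waits for the RollingUpdate strategy to replace pods one node at a time (the log shows `daemon-set-dvhtj` replaced by `daemon-set-shq5v` before `daemon-set-5nvnr` is touched). A hedged sketch of that kind of spec, using illustrative labels and the two images actually seen in the log:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate          # old pods are torn down and recreated per node
    rollingUpdate:
      maxUnavailable: 1          # at most one node's pod unavailable at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        # initial image per the log; the test then patches the template to
        # gcr.io/kubernetes-e2e-test-images/agnhost:2.8 and polls until no pod
        # reports "Wrong image"
        image: docker.io/library/httpd:2.4.38-alpine
```

Note the control-plane node is skipped throughout: the pods carry no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint, which is exactly the "can't tolerate node jerma-control-plane" message repeated in the log.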
• [SLOW TEST:39.218 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":126,"skipped":1946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:39:09.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4344 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 7 21:39:09.588: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 7 21:39:33.738: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.179:8080/dial?request=hostname&protocol=http&host=10.244.1.24&port=8080&tries=1'] Namespace:pod-network-test-4344 PodName:host-test-container-pod ContainerName:agnhost Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:39:33.738: INFO: >>> kubeConfig: /root/.kube/config I0607 21:39:33.774532 6 log.go:172] (0xc002bd7760) (0xc0023b1400) Create stream I0607 21:39:33.774560 6 log.go:172] (0xc002bd7760) (0xc0023b1400) Stream added, broadcasting: 1 I0607 21:39:33.776425 6 log.go:172] (0xc002bd7760) Reply frame received for 1 I0607 21:39:33.776470 6 log.go:172] (0xc002bd7760) (0xc0026d4d20) Create stream I0607 21:39:33.776486 6 log.go:172] (0xc002bd7760) (0xc0026d4d20) Stream added, broadcasting: 3 I0607 21:39:33.777650 6 log.go:172] (0xc002bd7760) Reply frame received for 3 I0607 21:39:33.777706 6 log.go:172] (0xc002bd7760) (0xc00274b2c0) Create stream I0607 21:39:33.777731 6 log.go:172] (0xc002bd7760) (0xc00274b2c0) Stream added, broadcasting: 5 I0607 21:39:33.778731 6 log.go:172] (0xc002bd7760) Reply frame received for 5 I0607 21:39:33.862648 6 log.go:172] (0xc002bd7760) Data frame received for 3 I0607 21:39:33.862685 6 log.go:172] (0xc0026d4d20) (3) Data frame handling I0607 21:39:33.862702 6 log.go:172] (0xc0026d4d20) (3) Data frame sent I0607 21:39:33.863374 6 log.go:172] (0xc002bd7760) Data frame received for 3 I0607 21:39:33.863409 6 log.go:172] (0xc0026d4d20) (3) Data frame handling I0607 21:39:33.863439 6 log.go:172] (0xc002bd7760) Data frame received for 5 I0607 21:39:33.863460 6 log.go:172] (0xc00274b2c0) (5) Data frame handling I0607 21:39:33.865633 6 log.go:172] (0xc002bd7760) Data frame received for 1 I0607 21:39:33.865648 6 log.go:172] (0xc0023b1400) (1) Data frame handling I0607 21:39:33.865655 6 log.go:172] (0xc0023b1400) (1) Data frame sent I0607 21:39:33.865666 6 log.go:172] (0xc002bd7760) (0xc0023b1400) Stream removed, broadcasting: 1 I0607 21:39:33.865739 6 log.go:172] (0xc002bd7760) Go away received I0607 21:39:33.865768 6 log.go:172] (0xc002bd7760) (0xc0023b1400) Stream removed, broadcasting: 1 I0607 21:39:33.865787 6 log.go:172] (0xc002bd7760) (0xc0026d4d20) Stream removed, broadcasting: 3 
I0607 21:39:33.865801 6 log.go:172] (0xc002bd7760) (0xc00274b2c0) Stream removed, broadcasting: 5 Jun 7 21:39:33.865: INFO: Waiting for responses: map[] Jun 7 21:39:33.887: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.179:8080/dial?request=hostname&protocol=http&host=10.244.2.178&port=8080&tries=1'] Namespace:pod-network-test-4344 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:39:33.887: INFO: >>> kubeConfig: /root/.kube/config I0607 21:39:33.938570 6 log.go:172] (0xc002ae3810) (0xc0026d5040) Create stream I0607 21:39:33.938602 6 log.go:172] (0xc002ae3810) (0xc0026d5040) Stream added, broadcasting: 1 I0607 21:39:33.946576 6 log.go:172] (0xc002ae3810) Reply frame received for 1 I0607 21:39:33.946605 6 log.go:172] (0xc002ae3810) (0xc0015c4320) Create stream I0607 21:39:33.946612 6 log.go:172] (0xc002ae3810) (0xc0015c4320) Stream added, broadcasting: 3 I0607 21:39:33.947621 6 log.go:172] (0xc002ae3810) Reply frame received for 3 I0607 21:39:33.947646 6 log.go:172] (0xc002ae3810) (0xc0015c43c0) Create stream I0607 21:39:33.947655 6 log.go:172] (0xc002ae3810) (0xc0015c43c0) Stream added, broadcasting: 5 I0607 21:39:33.948426 6 log.go:172] (0xc002ae3810) Reply frame received for 5 I0607 21:39:34.021888 6 log.go:172] (0xc002ae3810) Data frame received for 3 I0607 21:39:34.021974 6 log.go:172] (0xc0015c4320) (3) Data frame handling I0607 21:39:34.022033 6 log.go:172] (0xc0015c4320) (3) Data frame sent I0607 21:39:34.022567 6 log.go:172] (0xc002ae3810) Data frame received for 5 I0607 21:39:34.022590 6 log.go:172] (0xc0015c43c0) (5) Data frame handling I0607 21:39:34.022616 6 log.go:172] (0xc002ae3810) Data frame received for 3 I0607 21:39:34.022638 6 log.go:172] (0xc0015c4320) (3) Data frame handling I0607 21:39:34.024817 6 log.go:172] (0xc002ae3810) Data frame received for 1 I0607 21:39:34.024869 6 log.go:172] (0xc0026d5040) (1) Data frame handling I0607 
21:39:34.024900 6 log.go:172] (0xc0026d5040) (1) Data frame sent I0607 21:39:34.024924 6 log.go:172] (0xc002ae3810) (0xc0026d5040) Stream removed, broadcasting: 1 I0607 21:39:34.024973 6 log.go:172] (0xc002ae3810) Go away received I0607 21:39:34.025081 6 log.go:172] (0xc002ae3810) (0xc0026d5040) Stream removed, broadcasting: 1 I0607 21:39:34.025099 6 log.go:172] (0xc002ae3810) (0xc0015c4320) Stream removed, broadcasting: 3 I0607 21:39:34.025271 6 log.go:172] (0xc002ae3810) (0xc0015c43c0) Stream removed, broadcasting: 5 Jun 7 21:39:34.025: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:39:34.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4344" for this suite. • [SLOW TEST:24.482 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2002,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:39:34.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jun 7 21:39:34.066: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:39:48.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-270" for this suite. 
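[Editor's note] The CRD change exercised above can be sketched with a manifest like the following. This is a minimal illustration, not the object the e2e framework generates: the group `stable.example.com` and kind `CronTab` are placeholder names. Flipping `served: false` on one version removes that version's definition from the published OpenAPI spec while the other version's definition is unchanged:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # illustrative name
spec:
  group: stable.example.com
  names:
    kind: CronTab
    plural: crontabs
    singular: crontab
  scope: Namespaced
  versions:
    - name: v1
      served: true        # still served: stays in the published OpenAPI spec
      storage: true
      schema:
        openAPIV3Schema:
          type: object
    - name: v1beta1
      served: false       # marked not served: its definition is dropped from the spec
      storage: false
      schema:
        openAPIV3Schema:
          type: object
```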
• [SLOW TEST:14.510 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":128,"skipped":2002,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:39:48.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:39:48.622: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bb8b7fb-95cb-48f0-af43-4f8a153d46dc" in namespace "projected-4598" to be "success or failure" Jun 7 21:39:48.626: INFO: Pod "downwardapi-volume-9bb8b7fb-95cb-48f0-af43-4f8a153d46dc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.604521ms Jun 7 21:39:50.630: INFO: Pod "downwardapi-volume-9bb8b7fb-95cb-48f0-af43-4f8a153d46dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007742978s Jun 7 21:39:52.634: INFO: Pod "downwardapi-volume-9bb8b7fb-95cb-48f0-af43-4f8a153d46dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012160756s STEP: Saw pod success Jun 7 21:39:52.634: INFO: Pod "downwardapi-volume-9bb8b7fb-95cb-48f0-af43-4f8a153d46dc" satisfied condition "success or failure" Jun 7 21:39:52.637: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9bb8b7fb-95cb-48f0-af43-4f8a153d46dc container client-container: STEP: delete the pod Jun 7 21:39:52.658: INFO: Waiting for pod downwardapi-volume-9bb8b7fb-95cb-48f0-af43-4f8a153d46dc to disappear Jun 7 21:39:52.662: INFO: Pod downwardapi-volume-9bb8b7fb-95cb-48f0-af43-4f8a153d46dc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:39:52.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4598" for this suite. 
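[Editor's note] The pod this test creates is, in essence, a container reading the pod's own name out of a projected downward API volume. A minimal sketch under assumed names (the e2e framework uses a generated UID-based pod name and its own test image; `busybox` and the paths here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test uses a generated name
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox                 # illustrative; the test uses a k8s test image
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:                     # projected volume wrapping a downwardAPI source
        sources:
          - downwardAPI:
              items:
                - path: podname
                  fieldRef:
                    fieldPath: metadata.name   # exposes the pod's own name
```

The test then reads the container's log and checks it contains the pod name, which is why the pod reaching `Succeeded` satisfies the "success or failure" condition above.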
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2015,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:39:52.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:39:58.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5341" for this suite. STEP: Destroying namespace "nsdeletetest-6469" for this suite. Jun 7 21:39:58.938: INFO: Namespace nsdeletetest-6469 was already deleted STEP: Destroying namespace "nsdeletetest-3385" for this suite. 
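[Editor's note] The namespace/service lifecycle checked above can be sketched with two manifests (names are illustrative; the test uses generated `nsdeletetest-*` names). Deleting the namespace garbage-collects every object in it, so after recreating a namespace with the same name there are no services left:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest        # illustrative; the test generates this name
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest
spec:
  selector:
    app: test
  ports:
    - port: 80
      targetPort: 80
# kubectl delete namespace nsdeletetest   -> the Service is deleted with the namespace
# kubectl create namespace nsdeletetest   -> kubectl get svc -n nsdeletetest shows nothing
```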
• [SLOW TEST:6.273 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":130,"skipped":2030,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:39:58.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2873 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-2873 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2873 Jun 7 21:39:59.018: INFO: Found 0 
stateful pods, waiting for 1 Jun 7 21:40:09.023: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 7 21:40:09.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 7 21:40:09.294: INFO: stderr: "I0607 21:40:09.175352 2428 log.go:172] (0xc0006851e0) (0xc0005fbf40) Create stream\nI0607 21:40:09.175398 2428 log.go:172] (0xc0006851e0) (0xc0005fbf40) Stream added, broadcasting: 1\nI0607 21:40:09.177586 2428 log.go:172] (0xc0006851e0) Reply frame received for 1\nI0607 21:40:09.177619 2428 log.go:172] (0xc0006851e0) (0xc000532820) Create stream\nI0607 21:40:09.177629 2428 log.go:172] (0xc0006851e0) (0xc000532820) Stream added, broadcasting: 3\nI0607 21:40:09.178463 2428 log.go:172] (0xc0006851e0) Reply frame received for 3\nI0607 21:40:09.178506 2428 log.go:172] (0xc0006851e0) (0xc0001495e0) Create stream\nI0607 21:40:09.178518 2428 log.go:172] (0xc0006851e0) (0xc0001495e0) Stream added, broadcasting: 5\nI0607 21:40:09.179343 2428 log.go:172] (0xc0006851e0) Reply frame received for 5\nI0607 21:40:09.253515 2428 log.go:172] (0xc0006851e0) Data frame received for 5\nI0607 21:40:09.253537 2428 log.go:172] (0xc0001495e0) (5) Data frame handling\nI0607 21:40:09.253547 2428 log.go:172] (0xc0001495e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:40:09.284595 2428 log.go:172] (0xc0006851e0) Data frame received for 3\nI0607 21:40:09.284648 2428 log.go:172] (0xc000532820) (3) Data frame handling\nI0607 21:40:09.284667 2428 log.go:172] (0xc000532820) (3) Data frame sent\nI0607 21:40:09.285042 2428 log.go:172] (0xc0006851e0) Data frame received for 5\nI0607 21:40:09.285072 2428 log.go:172] (0xc0001495e0) (5) Data frame handling\nI0607 21:40:09.285419 2428 log.go:172] (0xc0006851e0) 
Data frame received for 3\nI0607 21:40:09.285440 2428 log.go:172] (0xc000532820) (3) Data frame handling\nI0607 21:40:09.287719 2428 log.go:172] (0xc0006851e0) Data frame received for 1\nI0607 21:40:09.287735 2428 log.go:172] (0xc0005fbf40) (1) Data frame handling\nI0607 21:40:09.287753 2428 log.go:172] (0xc0005fbf40) (1) Data frame sent\nI0607 21:40:09.287765 2428 log.go:172] (0xc0006851e0) (0xc0005fbf40) Stream removed, broadcasting: 1\nI0607 21:40:09.287998 2428 log.go:172] (0xc0006851e0) (0xc0005fbf40) Stream removed, broadcasting: 1\nI0607 21:40:09.288015 2428 log.go:172] (0xc0006851e0) (0xc000532820) Stream removed, broadcasting: 3\nI0607 21:40:09.288148 2428 log.go:172] (0xc0006851e0) Go away received\nI0607 21:40:09.288179 2428 log.go:172] (0xc0006851e0) (0xc0001495e0) Stream removed, broadcasting: 5\n" Jun 7 21:40:09.294: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:40:09.294: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 7 21:40:09.298: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 7 21:40:19.303: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 7 21:40:19.303: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 21:40:19.317: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:19.317: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:19.317: INFO: Jun 7 21:40:19.317: INFO: 
StatefulSet ss has not reached scale 3, at 1 Jun 7 21:40:20.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994483487s Jun 7 21:40:21.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989390416s Jun 7 21:40:22.678: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.749238242s Jun 7 21:40:23.702: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.633341s Jun 7 21:40:24.726: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.609357742s Jun 7 21:40:25.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.585927831s Jun 7 21:40:26.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.581545718s Jun 7 21:40:27.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.576631352s Jun 7 21:40:28.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 571.560804ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2873 Jun 7 21:40:29.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 7 21:40:29.954: INFO: stderr: "I0607 21:40:29.873618 2451 log.go:172] (0xc00099e000) (0xc000a7e000) Create stream\nI0607 21:40:29.873691 2451 log.go:172] (0xc00099e000) (0xc000a7e000) Stream added, broadcasting: 1\nI0607 21:40:29.876629 2451 log.go:172] (0xc00099e000) Reply frame received for 1\nI0607 21:40:29.876680 2451 log.go:172] (0xc00099e000) (0xc0009ee3c0) Create stream\nI0607 21:40:29.876696 2451 log.go:172] (0xc00099e000) (0xc0009ee3c0) Stream added, broadcasting: 3\nI0607 21:40:29.877725 2451 log.go:172] (0xc00099e000) Reply frame received for 3\nI0607 21:40:29.878271 2451 log.go:172] (0xc00099e000) (0xc000954000) Create stream\nI0607 21:40:29.878302 2451 log.go:172] (0xc00099e000) (0xc000954000) Stream added, broadcasting: 5\nI0607 
21:40:29.879147 2451 log.go:172] (0xc00099e000) Reply frame received for 5\nI0607 21:40:29.946041 2451 log.go:172] (0xc00099e000) Data frame received for 5\nI0607 21:40:29.946084 2451 log.go:172] (0xc000954000) (5) Data frame handling\nI0607 21:40:29.946099 2451 log.go:172] (0xc000954000) (5) Data frame sent\nI0607 21:40:29.946107 2451 log.go:172] (0xc00099e000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0607 21:40:29.946116 2451 log.go:172] (0xc000954000) (5) Data frame handling\nI0607 21:40:29.946166 2451 log.go:172] (0xc00099e000) Data frame received for 3\nI0607 21:40:29.946186 2451 log.go:172] (0xc0009ee3c0) (3) Data frame handling\nI0607 21:40:29.946210 2451 log.go:172] (0xc0009ee3c0) (3) Data frame sent\nI0607 21:40:29.946223 2451 log.go:172] (0xc00099e000) Data frame received for 3\nI0607 21:40:29.946233 2451 log.go:172] (0xc0009ee3c0) (3) Data frame handling\nI0607 21:40:29.947460 2451 log.go:172] (0xc00099e000) Data frame received for 1\nI0607 21:40:29.947491 2451 log.go:172] (0xc000a7e000) (1) Data frame handling\nI0607 21:40:29.947527 2451 log.go:172] (0xc000a7e000) (1) Data frame sent\nI0607 21:40:29.947557 2451 log.go:172] (0xc00099e000) (0xc000a7e000) Stream removed, broadcasting: 1\nI0607 21:40:29.947738 2451 log.go:172] (0xc00099e000) Go away received\nI0607 21:40:29.947972 2451 log.go:172] (0xc00099e000) (0xc000a7e000) Stream removed, broadcasting: 1\nI0607 21:40:29.948005 2451 log.go:172] (0xc00099e000) (0xc0009ee3c0) Stream removed, broadcasting: 3\nI0607 21:40:29.948027 2451 log.go:172] (0xc00099e000) (0xc000954000) Stream removed, broadcasting: 5\n" Jun 7 21:40:29.955: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 7 21:40:29.955: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 7 21:40:29.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2873 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 7 21:40:30.173: INFO: stderr: "I0607 21:40:30.092098 2471 log.go:172] (0xc00011b600) (0xc00050e1e0) Create stream\nI0607 21:40:30.092152 2471 log.go:172] (0xc00011b600) (0xc00050e1e0) Stream added, broadcasting: 1\nI0607 21:40:30.094668 2471 log.go:172] (0xc00011b600) Reply frame received for 1\nI0607 21:40:30.094707 2471 log.go:172] (0xc00011b600) (0xc00067e000) Create stream\nI0607 21:40:30.094728 2471 log.go:172] (0xc00011b600) (0xc00067e000) Stream added, broadcasting: 3\nI0607 21:40:30.095640 2471 log.go:172] (0xc00011b600) Reply frame received for 3\nI0607 21:40:30.095714 2471 log.go:172] (0xc00011b600) (0xc00050e280) Create stream\nI0607 21:40:30.095743 2471 log.go:172] (0xc00011b600) (0xc00050e280) Stream added, broadcasting: 5\nI0607 21:40:30.096696 2471 log.go:172] (0xc00011b600) Reply frame received for 5\nI0607 21:40:30.165850 2471 log.go:172] (0xc00011b600) Data frame received for 5\nI0607 21:40:30.165896 2471 log.go:172] (0xc00050e280) (5) Data frame handling\nI0607 21:40:30.165912 2471 log.go:172] (0xc00050e280) (5) Data frame sent\nI0607 21:40:30.165921 2471 log.go:172] (0xc00011b600) Data frame received for 5\nI0607 21:40:30.165931 2471 log.go:172] (0xc00050e280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0607 21:40:30.165955 2471 log.go:172] (0xc00011b600) Data frame received for 3\nI0607 21:40:30.165966 2471 log.go:172] (0xc00067e000) (3) Data frame handling\nI0607 21:40:30.165981 2471 log.go:172] (0xc00067e000) (3) Data frame sent\nI0607 21:40:30.165989 2471 log.go:172] (0xc00011b600) Data frame received for 3\nI0607 21:40:30.165996 2471 log.go:172] (0xc00067e000) (3) Data frame handling\nI0607 21:40:30.167751 2471 log.go:172] (0xc00011b600) Data frame received for 1\nI0607 21:40:30.167768 2471 log.go:172] (0xc00050e1e0) (1) Data 
frame handling\nI0607 21:40:30.167784 2471 log.go:172] (0xc00050e1e0) (1) Data frame sent\nI0607 21:40:30.167802 2471 log.go:172] (0xc00011b600) (0xc00050e1e0) Stream removed, broadcasting: 1\nI0607 21:40:30.167823 2471 log.go:172] (0xc00011b600) Go away received\nI0607 21:40:30.168226 2471 log.go:172] (0xc00011b600) (0xc00050e1e0) Stream removed, broadcasting: 1\nI0607 21:40:30.168256 2471 log.go:172] (0xc00011b600) (0xc00067e000) Stream removed, broadcasting: 3\nI0607 21:40:30.168269 2471 log.go:172] (0xc00011b600) (0xc00050e280) Stream removed, broadcasting: 5\n" Jun 7 21:40:30.174: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 7 21:40:30.174: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 7 21:40:30.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 7 21:40:30.363: INFO: stderr: "I0607 21:40:30.296246 2493 log.go:172] (0xc000594dc0) (0xc00063bcc0) Create stream\nI0607 21:40:30.296306 2493 log.go:172] (0xc000594dc0) (0xc00063bcc0) Stream added, broadcasting: 1\nI0607 21:40:30.298520 2493 log.go:172] (0xc000594dc0) Reply frame received for 1\nI0607 21:40:30.298550 2493 log.go:172] (0xc000594dc0) (0xc000912000) Create stream\nI0607 21:40:30.298562 2493 log.go:172] (0xc000594dc0) (0xc000912000) Stream added, broadcasting: 3\nI0607 21:40:30.299463 2493 log.go:172] (0xc000594dc0) Reply frame received for 3\nI0607 21:40:30.299619 2493 log.go:172] (0xc000594dc0) (0xc00063bd60) Create stream\nI0607 21:40:30.299644 2493 log.go:172] (0xc000594dc0) (0xc00063bd60) Stream added, broadcasting: 5\nI0607 21:40:30.300904 2493 log.go:172] (0xc000594dc0) Reply frame received for 5\nI0607 21:40:30.355127 2493 log.go:172] (0xc000594dc0) Data frame received for 5\nI0607 21:40:30.355162 2493 log.go:172] 
(0xc00063bd60) (5) Data frame handling\nI0607 21:40:30.355173 2493 log.go:172] (0xc00063bd60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0607 21:40:30.355210 2493 log.go:172] (0xc000594dc0) Data frame received for 3\nI0607 21:40:30.355270 2493 log.go:172] (0xc000912000) (3) Data frame handling\nI0607 21:40:30.355304 2493 log.go:172] (0xc000912000) (3) Data frame sent\nI0607 21:40:30.355329 2493 log.go:172] (0xc000594dc0) Data frame received for 3\nI0607 21:40:30.355356 2493 log.go:172] (0xc000594dc0) Data frame received for 5\nI0607 21:40:30.355383 2493 log.go:172] (0xc00063bd60) (5) Data frame handling\nI0607 21:40:30.355413 2493 log.go:172] (0xc000912000) (3) Data frame handling\nI0607 21:40:30.357376 2493 log.go:172] (0xc000594dc0) Data frame received for 1\nI0607 21:40:30.357409 2493 log.go:172] (0xc00063bcc0) (1) Data frame handling\nI0607 21:40:30.357437 2493 log.go:172] (0xc00063bcc0) (1) Data frame sent\nI0607 21:40:30.357455 2493 log.go:172] (0xc000594dc0) (0xc00063bcc0) Stream removed, broadcasting: 1\nI0607 21:40:30.357475 2493 log.go:172] (0xc000594dc0) Go away received\nI0607 21:40:30.357897 2493 log.go:172] (0xc000594dc0) (0xc00063bcc0) Stream removed, broadcasting: 1\nI0607 21:40:30.357917 2493 log.go:172] (0xc000594dc0) (0xc000912000) Stream removed, broadcasting: 3\nI0607 21:40:30.357925 2493 log.go:172] (0xc000594dc0) (0xc00063bd60) Stream removed, broadcasting: 5\n" Jun 7 21:40:30.363: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 7 21:40:30.363: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 7 21:40:30.367: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 7 21:40:40.372: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 7 
21:40:40.372: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 7 21:40:40.372: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 7 21:40:40.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 7 21:40:40.593: INFO: stderr: "I0607 21:40:40.504983 2513 log.go:172] (0xc0008c8000) (0xc000948000) Create stream\nI0607 21:40:40.505062 2513 log.go:172] (0xc0008c8000) (0xc000948000) Stream added, broadcasting: 1\nI0607 21:40:40.508474 2513 log.go:172] (0xc0008c8000) Reply frame received for 1\nI0607 21:40:40.508521 2513 log.go:172] (0xc0008c8000) (0xc0008ac000) Create stream\nI0607 21:40:40.508533 2513 log.go:172] (0xc0008c8000) (0xc0008ac000) Stream added, broadcasting: 3\nI0607 21:40:40.509791 2513 log.go:172] (0xc0008c8000) Reply frame received for 3\nI0607 21:40:40.509848 2513 log.go:172] (0xc0008c8000) (0xc0008ac140) Create stream\nI0607 21:40:40.509867 2513 log.go:172] (0xc0008c8000) (0xc0008ac140) Stream added, broadcasting: 5\nI0607 21:40:40.511168 2513 log.go:172] (0xc0008c8000) Reply frame received for 5\nI0607 21:40:40.584177 2513 log.go:172] (0xc0008c8000) Data frame received for 5\nI0607 21:40:40.584228 2513 log.go:172] (0xc0008ac140) (5) Data frame handling\nI0607 21:40:40.584262 2513 log.go:172] (0xc0008ac140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:40:40.584312 2513 log.go:172] (0xc0008c8000) Data frame received for 3\nI0607 21:40:40.584369 2513 log.go:172] (0xc0008c8000) Data frame received for 5\nI0607 21:40:40.584404 2513 log.go:172] (0xc0008ac140) (5) Data frame handling\nI0607 21:40:40.584445 2513 log.go:172] (0xc0008ac000) (3) Data frame handling\nI0607 21:40:40.584497 2513 log.go:172] (0xc0008ac000) (3) Data frame sent\nI0607 
21:40:40.584525 2513 log.go:172] (0xc0008c8000) Data frame received for 3\nI0607 21:40:40.584544 2513 log.go:172] (0xc0008ac000) (3) Data frame handling\nI0607 21:40:40.586727 2513 log.go:172] (0xc0008c8000) Data frame received for 1\nI0607 21:40:40.586755 2513 log.go:172] (0xc000948000) (1) Data frame handling\nI0607 21:40:40.586781 2513 log.go:172] (0xc000948000) (1) Data frame sent\nI0607 21:40:40.586797 2513 log.go:172] (0xc0008c8000) (0xc000948000) Stream removed, broadcasting: 1\nI0607 21:40:40.587145 2513 log.go:172] (0xc0008c8000) Go away received\nI0607 21:40:40.587199 2513 log.go:172] (0xc0008c8000) (0xc000948000) Stream removed, broadcasting: 1\nI0607 21:40:40.587235 2513 log.go:172] (0xc0008c8000) (0xc0008ac000) Stream removed, broadcasting: 3\nI0607 21:40:40.587256 2513 log.go:172] (0xc0008c8000) (0xc0008ac140) Stream removed, broadcasting: 5\n" Jun 7 21:40:40.593: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:40:40.594: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 7 21:40:40.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 7 21:40:40.834: INFO: stderr: "I0607 21:40:40.721366 2533 log.go:172] (0xc000b74f20) (0xc000b92460) Create stream\nI0607 21:40:40.721426 2533 log.go:172] (0xc000b74f20) (0xc000b92460) Stream added, broadcasting: 1\nI0607 21:40:40.723836 2533 log.go:172] (0xc000b74f20) Reply frame received for 1\nI0607 21:40:40.723893 2533 log.go:172] (0xc000b74f20) (0xc000b6a1e0) Create stream\nI0607 21:40:40.723908 2533 log.go:172] (0xc000b74f20) (0xc000b6a1e0) Stream added, broadcasting: 3\nI0607 21:40:40.724892 2533 log.go:172] (0xc000b74f20) Reply frame received for 3\nI0607 21:40:40.724929 2533 log.go:172] (0xc000b74f20) (0xc000a6a320) Create stream\nI0607 
21:40:40.724964 2533 log.go:172] (0xc000b74f20) (0xc000a6a320) Stream added, broadcasting: 5\nI0607 21:40:40.726071 2533 log.go:172] (0xc000b74f20) Reply frame received for 5\nI0607 21:40:40.792946 2533 log.go:172] (0xc000b74f20) Data frame received for 5\nI0607 21:40:40.792977 2533 log.go:172] (0xc000a6a320) (5) Data frame handling\nI0607 21:40:40.792998 2533 log.go:172] (0xc000a6a320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:40:40.826052 2533 log.go:172] (0xc000b74f20) Data frame received for 3\nI0607 21:40:40.826101 2533 log.go:172] (0xc000b6a1e0) (3) Data frame handling\nI0607 21:40:40.826190 2533 log.go:172] (0xc000b6a1e0) (3) Data frame sent\nI0607 21:40:40.826251 2533 log.go:172] (0xc000b74f20) Data frame received for 5\nI0607 21:40:40.826274 2533 log.go:172] (0xc000a6a320) (5) Data frame handling\nI0607 21:40:40.826407 2533 log.go:172] (0xc000b74f20) Data frame received for 3\nI0607 21:40:40.826515 2533 log.go:172] (0xc000b6a1e0) (3) Data frame handling\nI0607 21:40:40.828568 2533 log.go:172] (0xc000b74f20) Data frame received for 1\nI0607 21:40:40.828591 2533 log.go:172] (0xc000b92460) (1) Data frame handling\nI0607 21:40:40.828608 2533 log.go:172] (0xc000b92460) (1) Data frame sent\nI0607 21:40:40.828619 2533 log.go:172] (0xc000b74f20) (0xc000b92460) Stream removed, broadcasting: 1\nI0607 21:40:40.828629 2533 log.go:172] (0xc000b74f20) Go away received\nI0607 21:40:40.829283 2533 log.go:172] (0xc000b74f20) (0xc000b92460) Stream removed, broadcasting: 1\nI0607 21:40:40.829304 2533 log.go:172] (0xc000b74f20) (0xc000b6a1e0) Stream removed, broadcasting: 3\nI0607 21:40:40.829309 2533 log.go:172] (0xc000b74f20) (0xc000a6a320) Stream removed, broadcasting: 5\n" Jun 7 21:40:40.834: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:40:40.834: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 
7 21:40:40.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 7 21:40:41.080: INFO: stderr: "I0607 21:40:40.970214 2553 log.go:172] (0xc000ab7340) (0xc000af0500) Create stream\nI0607 21:40:40.970264 2553 log.go:172] (0xc000ab7340) (0xc000af0500) Stream added, broadcasting: 1\nI0607 21:40:40.972306 2553 log.go:172] (0xc000ab7340) Reply frame received for 1\nI0607 21:40:40.972334 2553 log.go:172] (0xc000ab7340) (0xc0009fe1e0) Create stream\nI0607 21:40:40.972341 2553 log.go:172] (0xc000ab7340) (0xc0009fe1e0) Stream added, broadcasting: 3\nI0607 21:40:40.973459 2553 log.go:172] (0xc000ab7340) Reply frame received for 3\nI0607 21:40:40.973492 2553 log.go:172] (0xc000ab7340) (0xc000af05a0) Create stream\nI0607 21:40:40.973499 2553 log.go:172] (0xc000ab7340) (0xc000af05a0) Stream added, broadcasting: 5\nI0607 21:40:40.974403 2553 log.go:172] (0xc000ab7340) Reply frame received for 5\nI0607 21:40:41.031825 2553 log.go:172] (0xc000ab7340) Data frame received for 5\nI0607 21:40:41.031849 2553 log.go:172] (0xc000af05a0) (5) Data frame handling\nI0607 21:40:41.031863 2553 log.go:172] (0xc000af05a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0607 21:40:41.071502 2553 log.go:172] (0xc000ab7340) Data frame received for 3\nI0607 21:40:41.071594 2553 log.go:172] (0xc0009fe1e0) (3) Data frame handling\nI0607 21:40:41.071621 2553 log.go:172] (0xc0009fe1e0) (3) Data frame sent\nI0607 21:40:41.071875 2553 log.go:172] (0xc000ab7340) Data frame received for 3\nI0607 21:40:41.071899 2553 log.go:172] (0xc0009fe1e0) (3) Data frame handling\nI0607 21:40:41.072193 2553 log.go:172] (0xc000ab7340) Data frame received for 5\nI0607 21:40:41.072212 2553 log.go:172] (0xc000af05a0) (5) Data frame handling\nI0607 21:40:41.073843 2553 log.go:172] (0xc000ab7340) Data frame received for 1\nI0607 21:40:41.073855 2553 log.go:172] 
(0xc000af0500) (1) Data frame handling\nI0607 21:40:41.073861 2553 log.go:172] (0xc000af0500) (1) Data frame sent\nI0607 21:40:41.074088 2553 log.go:172] (0xc000ab7340) (0xc000af0500) Stream removed, broadcasting: 1\nI0607 21:40:41.074111 2553 log.go:172] (0xc000ab7340) Go away received\nI0607 21:40:41.074426 2553 log.go:172] (0xc000ab7340) (0xc000af0500) Stream removed, broadcasting: 1\nI0607 21:40:41.074446 2553 log.go:172] (0xc000ab7340) (0xc0009fe1e0) Stream removed, broadcasting: 3\nI0607 21:40:41.074456 2553 log.go:172] (0xc000ab7340) (0xc000af05a0) Stream removed, broadcasting: 5\n" Jun 7 21:40:41.080: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 7 21:40:41.080: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 7 21:40:41.080: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 21:40:41.083: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 7 21:40:51.092: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 7 21:40:51.092: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 7 21:40:51.092: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 7 21:40:51.108: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:51.108: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:51.108: INFO: ss-1 jerma-worker Running [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:51.108: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:51.108: INFO: Jun 7 21:40:51.108: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 7 21:40:52.114: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:52.114: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:52.114: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:52.114: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:52.114: INFO: Jun 7 21:40:52.114: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 7 21:40:53.119: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:53.119: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:53.119: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:53.119: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:53.119: INFO: Jun 7 21:40:53.119: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 7 21:40:54.139: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:54.139: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:54.139: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:54.139: INFO: Jun 7 21:40:54.139: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 7 21:40:55.144: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:55.144: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:55.144: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:55.144: INFO: Jun 7 21:40:55.144: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 7 21:40:56.148: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:56.148: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:56.148: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:56.148: INFO: Jun 7 21:40:56.148: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 7 21:40:57.153: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:57.153: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:57.158: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:57.158: INFO: Jun 7 21:40:57.158: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 7 21:40:58.164: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:58.164: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:58.164: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:58.164: INFO: Jun 7 21:40:58.164: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 7 21:40:59.168: INFO: POD NODE PHASE GRACE CONDITIONS Jun 7 21:40:59.168: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:39:59 +0000 UTC }] Jun 7 21:40:59.168: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-07 21:40:19 +0000 UTC }] Jun 7 21:40:59.168: INFO: Jun 7 21:40:59.168: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 7 21:41:00.173: INFO: Verifying statefulset ss doesn't scale past 0 for another 931.996015ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2873 Jun 7 21:41:01.176: INFO: Scaling statefulset ss to 0 Jun 7 21:41:01.183: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 7 21:41:01.185: INFO: Deleting all statefulset in ns statefulset-2873 Jun 7 21:41:01.186: INFO: Scaling statefulset ss to 0 Jun 
7 21:41:01.191: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 21:41:01.193: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:41:01.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2873" for this suite. • [SLOW TEST:62.282 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":131,"skipped":2049,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:41:01.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Jun 7 21:41:01.335: INFO: Created pod &Pod{ObjectMeta:{dns-751 dns-751 /api/v1/namespaces/dns-751/pods/dns-751 2cea140d-4429-47ac-9c5c-727ac89eaf99 22533094 0 2020-06-07 21:41:01 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t2r7k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t2r7k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t2r7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Jun 7 21:41:05.346: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-751 PodName:dns-751 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:41:05.346: INFO: >>> kubeConfig: /root/.kube/config I0607 21:41:05.375035 6 log.go:172] (0xc0050b0630) (0xc0013c03c0) Create stream I0607 21:41:05.375083 6 log.go:172] (0xc0050b0630) (0xc0013c03c0) Stream added, broadcasting: 1 I0607 21:41:05.377480 6 log.go:172] (0xc0050b0630) Reply frame received for 1 I0607 21:41:05.377540 6 log.go:172] (0xc0050b0630) (0xc0027b4000) Create stream I0607 21:41:05.377558 6 log.go:172] (0xc0050b0630) (0xc0027b4000) Stream added, broadcasting: 3 I0607 21:41:05.378301 6 log.go:172] (0xc0050b0630) Reply frame received for 3 I0607 21:41:05.378328 6 log.go:172] (0xc0050b0630) (0xc00282c820) Create stream I0607 21:41:05.378339 6 log.go:172] (0xc0050b0630) (0xc00282c820) Stream added, broadcasting: 5 I0607 21:41:05.378971 6 log.go:172] (0xc0050b0630) Reply frame received for 5 I0607 21:41:05.451397 6 log.go:172] (0xc0050b0630) Data frame received for 3 I0607 21:41:05.451429 6 log.go:172] (0xc0027b4000) (3) Data frame handling I0607 21:41:05.451507 6 log.go:172] (0xc0027b4000) (3) Data frame sent I0607 21:41:05.453809 6 log.go:172] (0xc0050b0630) Data frame received for 3 I0607 21:41:05.453836 6 log.go:172] (0xc0027b4000) (3) Data frame handling I0607 21:41:05.453870 6 log.go:172] (0xc0050b0630) Data frame received for 5 I0607 21:41:05.453906 6 log.go:172] (0xc00282c820) (5) Data frame handling I0607 21:41:05.455799 6 log.go:172] (0xc0050b0630) Data frame received for 1 I0607 21:41:05.455835 6 log.go:172] (0xc0013c03c0) (1) Data frame handling I0607 21:41:05.455858 6 log.go:172] (0xc0013c03c0) (1) Data frame sent I0607 21:41:05.455879 6 log.go:172] (0xc0050b0630) (0xc0013c03c0) Stream removed, broadcasting: 1 I0607 21:41:05.455928 6 log.go:172] (0xc0050b0630) Go away received I0607 21:41:05.456159 6 log.go:172] (0xc0050b0630) 
(0xc0013c03c0) Stream removed, broadcasting: 1 I0607 21:41:05.456200 6 log.go:172] (0xc0050b0630) (0xc0027b4000) Stream removed, broadcasting: 3 I0607 21:41:05.456225 6 log.go:172] (0xc0050b0630) (0xc00282c820) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Jun 7 21:41:05.456: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-751 PodName:dns-751 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:41:05.456: INFO: >>> kubeConfig: /root/.kube/config I0607 21:41:05.484141 6 log.go:172] (0xc0026c1a20) (0xc00282ce60) Create stream I0607 21:41:05.484192 6 log.go:172] (0xc0026c1a20) (0xc00282ce60) Stream added, broadcasting: 1 I0607 21:41:05.487166 6 log.go:172] (0xc0026c1a20) Reply frame received for 1 I0607 21:41:05.487217 6 log.go:172] (0xc0026c1a20) (0xc00282cf00) Create stream I0607 21:41:05.487244 6 log.go:172] (0xc0026c1a20) (0xc00282cf00) Stream added, broadcasting: 3 I0607 21:41:05.488674 6 log.go:172] (0xc0026c1a20) Reply frame received for 3 I0607 21:41:05.488730 6 log.go:172] (0xc0026c1a20) (0xc001e04460) Create stream I0607 21:41:05.488745 6 log.go:172] (0xc0026c1a20) (0xc001e04460) Stream added, broadcasting: 5 I0607 21:41:05.490062 6 log.go:172] (0xc0026c1a20) Reply frame received for 5 I0607 21:41:05.571005 6 log.go:172] (0xc0026c1a20) Data frame received for 3 I0607 21:41:05.571059 6 log.go:172] (0xc00282cf00) (3) Data frame handling I0607 21:41:05.571100 6 log.go:172] (0xc00282cf00) (3) Data frame sent I0607 21:41:05.572330 6 log.go:172] (0xc0026c1a20) Data frame received for 5 I0607 21:41:05.572375 6 log.go:172] (0xc0026c1a20) Data frame received for 3 I0607 21:41:05.572431 6 log.go:172] (0xc00282cf00) (3) Data frame handling I0607 21:41:05.572494 6 log.go:172] (0xc001e04460) (5) Data frame handling I0607 21:41:05.574286 6 log.go:172] (0xc0026c1a20) Data frame received for 1 I0607 21:41:05.574325 6 log.go:172] (0xc00282ce60) (1) Data 
frame handling I0607 21:41:05.574454 6 log.go:172] (0xc00282ce60) (1) Data frame sent I0607 21:41:05.574481 6 log.go:172] (0xc0026c1a20) (0xc00282ce60) Stream removed, broadcasting: 1 I0607 21:41:05.574505 6 log.go:172] (0xc0026c1a20) Go away received I0607 21:41:05.574675 6 log.go:172] (0xc0026c1a20) (0xc00282ce60) Stream removed, broadcasting: 1 I0607 21:41:05.574717 6 log.go:172] (0xc0026c1a20) (0xc00282cf00) Stream removed, broadcasting: 3 I0607 21:41:05.574800 6 log.go:172] (0xc0026c1a20) (0xc001e04460) Stream removed, broadcasting: 5 Jun 7 21:41:05.574: INFO: Deleting pod dns-751... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:41:05.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-751" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":132,"skipped":2060,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:41:05.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 21:41:05.703: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57efeb5b-af57-4417-990f-ea3cc5259cb6" in namespace "downward-api-1625" to be "success or failure" Jun 7 21:41:05.937: INFO: Pod "downwardapi-volume-57efeb5b-af57-4417-990f-ea3cc5259cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 234.14959ms Jun 7 21:41:07.942: INFO: Pod "downwardapi-volume-57efeb5b-af57-4417-990f-ea3cc5259cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238289749s Jun 7 21:41:09.946: INFO: Pod "downwardapi-volume-57efeb5b-af57-4417-990f-ea3cc5259cb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.242636087s STEP: Saw pod success Jun 7 21:41:09.946: INFO: Pod "downwardapi-volume-57efeb5b-af57-4417-990f-ea3cc5259cb6" satisfied condition "success or failure" Jun 7 21:41:09.949: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-57efeb5b-af57-4417-990f-ea3cc5259cb6 container client-container: STEP: delete the pod Jun 7 21:41:09.984: INFO: Waiting for pod downwardapi-volume-57efeb5b-af57-4417-990f-ea3cc5259cb6 to disappear Jun 7 21:41:10.000: INFO: Pod downwardapi-volume-57efeb5b-af57-4417-990f-ea3cc5259cb6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:41:10.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1625" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2067,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:41:10.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jun 7 21:41:10.075: INFO: Waiting up to 5m0s for pod "var-expansion-d5d3b9fd-5013-482e-bdab-71334d66909e" in namespace "var-expansion-2942" to be "success or failure" Jun 7 21:41:10.085: INFO: Pod "var-expansion-d5d3b9fd-5013-482e-bdab-71334d66909e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.149199ms Jun 7 21:41:12.088: INFO: Pod "var-expansion-d5d3b9fd-5013-482e-bdab-71334d66909e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012889791s Jun 7 21:41:14.093: INFO: Pod "var-expansion-d5d3b9fd-5013-482e-bdab-71334d66909e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017787322s STEP: Saw pod success Jun 7 21:41:14.093: INFO: Pod "var-expansion-d5d3b9fd-5013-482e-bdab-71334d66909e" satisfied condition "success or failure" Jun 7 21:41:14.096: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-d5d3b9fd-5013-482e-bdab-71334d66909e container dapi-container: STEP: delete the pod Jun 7 21:41:14.116: INFO: Waiting for pod var-expansion-d5d3b9fd-5013-482e-bdab-71334d66909e to disappear Jun 7 21:41:14.120: INFO: Pod var-expansion-d5d3b9fd-5013-482e-bdab-71334d66909e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:41:14.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2942" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2100,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:41:14.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-hvxv STEP: Creating a 
pod to test atomic-volume-subpath Jun 7 21:41:14.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hvxv" in namespace "subpath-3853" to be "success or failure" Jun 7 21:41:14.238: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Pending", Reason="", readiness=false. Elapsed: 24.91267ms Jun 7 21:41:16.259: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045692905s Jun 7 21:41:18.264: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 4.050503503s Jun 7 21:41:20.269: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 6.055568367s Jun 7 21:41:22.274: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 8.060311885s Jun 7 21:41:24.278: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 10.064356263s Jun 7 21:41:26.361: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 12.147811783s Jun 7 21:41:28.366: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 14.152405783s Jun 7 21:41:30.385: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 16.17188367s Jun 7 21:41:32.397: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 18.18361004s Jun 7 21:41:34.401: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 20.18775579s Jun 7 21:41:36.410: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Running", Reason="", readiness=true. Elapsed: 22.196895629s Jun 7 21:41:38.414: INFO: Pod "pod-subpath-test-secret-hvxv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.201121719s STEP: Saw pod success Jun 7 21:41:38.415: INFO: Pod "pod-subpath-test-secret-hvxv" satisfied condition "success or failure" Jun 7 21:41:38.418: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-hvxv container test-container-subpath-secret-hvxv: STEP: delete the pod Jun 7 21:41:38.653: INFO: Waiting for pod pod-subpath-test-secret-hvxv to disappear Jun 7 21:41:38.665: INFO: Pod pod-subpath-test-secret-hvxv no longer exists STEP: Deleting pod pod-subpath-test-secret-hvxv Jun 7 21:41:38.665: INFO: Deleting pod "pod-subpath-test-secret-hvxv" in namespace "subpath-3853" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:41:38.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3853" for this suite. • [SLOW TEST:24.545 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":135,"skipped":2105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:41:38.675: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-rzzcf in namespace proxy-1507 I0607 21:41:38.779959 6 runners.go:189] Created replication controller with name: proxy-service-rzzcf, namespace: proxy-1507, replica count: 1 I0607 21:41:39.830406 6 runners.go:189] proxy-service-rzzcf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 21:41:40.830641 6 runners.go:189] proxy-service-rzzcf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 21:41:41.830883 6 runners.go:189] proxy-service-rzzcf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 21:41:42.831083 6 runners.go:189] proxy-service-rzzcf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0607 21:41:43.831386 6 runners.go:189] proxy-service-rzzcf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0607 21:41:44.831642 6 runners.go:189] proxy-service-rzzcf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0607 21:41:45.831923 6 runners.go:189] proxy-service-rzzcf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0607 21:41:46.832143 6 runners.go:189] proxy-service-rzzcf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady Jun 7 21:41:46.836: INFO: setup took 8.103061861s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 7 21:41:46.843: INFO: (0) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 6.106301ms) Jun 7 21:41:46.844: INFO: (0) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... (200; 7.113609ms) Jun 7 21:41:46.845: INFO: (0) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 8.028109ms) Jun 7 21:41:46.845: INFO: (0) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 8.218371ms) Jun 7 21:41:46.845: INFO: (0) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... (200; 8.594773ms) Jun 7 21:41:46.846: INFO: (0) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 8.733007ms) Jun 7 21:41:46.848: INFO: (0) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 10.702753ms) Jun 7 21:41:46.848: INFO: (0) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 11.285121ms) Jun 7 21:41:46.848: INFO: (0) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 11.618792ms) Jun 7 21:41:46.849: INFO: (0) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 11.830186ms) Jun 7 21:41:46.849: INFO: (0) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 11.679629ms) Jun 7 21:41:46.853: INFO: (0) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 16.021808ms) Jun 7 21:41:46.853: INFO: (0) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 16.155469ms) Jun 7 21:41:46.855: INFO: (0) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... 
(200; 5.45764ms) Jun 7 21:41:46.861: INFO: (1) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 5.771071ms) Jun 7 21:41:46.861: INFO: (1) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 5.827423ms) Jun 7 21:41:46.862: INFO: (1) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 6.41673ms) Jun 7 21:41:46.862: INFO: (1) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 6.57517ms) Jun 7 21:41:46.862: INFO: (1) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... (200; 6.618588ms) Jun 7 21:41:46.862: INFO: (1) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 6.716723ms) Jun 7 21:41:46.862: INFO: (1) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test<... (200; 3.650777ms) Jun 7 21:41:46.867: INFO: (2) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 3.981573ms) Jun 7 21:41:46.867: INFO: (2) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 4.090498ms) Jun 7 21:41:46.867: INFO: (2) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.255681ms) Jun 7 21:41:46.867: INFO: (2) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.279332ms) Jun 7 21:41:46.868: INFO: (2) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.991287ms) Jun 7 21:41:46.868: INFO: (2) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 5.077179ms) Jun 7 21:41:46.868: INFO: (2) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 5.116799ms) Jun 7 21:41:46.868: INFO: (2) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... 
(200; 5.979946ms) Jun 7 21:41:46.869: INFO: (2) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 5.958623ms) Jun 7 21:41:46.869: INFO: (2) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 5.885932ms) Jun 7 21:41:46.869: INFO: (2) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 5.90284ms) Jun 7 21:41:46.871: INFO: (3) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... (200; 2.254224ms) Jun 7 21:41:46.873: INFO: (3) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 4.125289ms) Jun 7 21:41:46.874: INFO: (3) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.468947ms) Jun 7 21:41:46.874: INFO: (3) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.496396ms) Jun 7 21:41:46.874: INFO: (3) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 5.056517ms) Jun 7 21:41:46.874: INFO: (3) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 5.024139ms) Jun 7 21:41:46.874: INFO: (3) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 5.086533ms) Jun 7 21:41:46.874: INFO: (3) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... (200; 5.195804ms) Jun 7 21:41:46.874: INFO: (3) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 5.230279ms) Jun 7 21:41:46.875: INFO: (3) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 5.733081ms) Jun 7 21:41:46.875: INFO: (3) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... (200; 3.251569ms) Jun 7 21:41:46.879: INFO: (4) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... 
(200; 3.578524ms) Jun 7 21:41:46.879: INFO: (4) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 3.664645ms) Jun 7 21:41:46.879: INFO: (4) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 3.665623ms) Jun 7 21:41:46.879: INFO: (4) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 3.763686ms) Jun 7 21:41:46.880: INFO: (4) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.198732ms) Jun 7 21:41:46.880: INFO: (4) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 4.401133ms) Jun 7 21:41:46.880: INFO: (4) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 4.760569ms) Jun 7 21:41:46.881: INFO: (4) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... (200; 4.441454ms) Jun 7 21:41:46.887: INFO: (5) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test<... (200; 6.21783ms) Jun 7 21:41:46.889: INFO: (5) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 6.239561ms) Jun 7 21:41:46.889: INFO: (5) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 6.262656ms) Jun 7 21:41:46.889: INFO: (5) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 6.414799ms) Jun 7 21:41:46.889: INFO: (5) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 6.411107ms) Jun 7 21:41:46.889: INFO: (5) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 6.434367ms) Jun 7 21:41:46.892: INFO: (6) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... 
(200; 3.264775ms) Jun 7 21:41:46.892: INFO: (6) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 3.205027ms) Jun 7 21:41:46.892: INFO: (6) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 3.349146ms) Jun 7 21:41:46.892: INFO: (6) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 3.37214ms) Jun 7 21:41:46.892: INFO: (6) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 3.616194ms) Jun 7 21:41:46.894: INFO: (6) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 5.096159ms) Jun 7 21:41:46.894: INFO: (6) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 5.087803ms) Jun 7 21:41:46.894: INFO: (6) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... (200; 5.304047ms) Jun 7 21:41:46.894: INFO: (6) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 5.45282ms) Jun 7 21:41:46.895: INFO: (6) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 5.979553ms) Jun 7 21:41:46.895: INFO: (6) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 6.230746ms) Jun 7 21:41:46.895: INFO: (6) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 6.31428ms) Jun 7 21:41:46.895: INFO: (6) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 6.31313ms) Jun 7 21:41:46.895: INFO: (6) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 6.333803ms) Jun 7 21:41:46.895: INFO: (6) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 6.507306ms) Jun 7 21:41:46.899: INFO: (7) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... 
(200; 3.671792ms) Jun 7 21:41:46.900: INFO: (7) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.158017ms) Jun 7 21:41:46.900: INFO: (7) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test (200; 4.666408ms) Jun 7 21:41:46.900: INFO: (7) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.721457ms) Jun 7 21:41:46.900: INFO: (7) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... (200; 4.755878ms) Jun 7 21:41:46.900: INFO: (7) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 4.929979ms) Jun 7 21:41:46.900: INFO: (7) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 4.855343ms) Jun 7 21:41:46.901: INFO: (7) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 5.877805ms) Jun 7 21:41:46.902: INFO: (7) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 6.051512ms) Jun 7 21:41:46.902: INFO: (7) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 6.010306ms) Jun 7 21:41:46.906: INFO: (8) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.435125ms) Jun 7 21:41:46.906: INFO: (8) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 4.564115ms) Jun 7 21:41:46.906: INFO: (8) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... 
(200; 4.745346ms) Jun 7 21:41:46.907: INFO: (8) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 4.872431ms) Jun 7 21:41:46.907: INFO: (8) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 5.028936ms) Jun 7 21:41:46.907: INFO: (8) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 4.979223ms) Jun 7 21:41:46.907: INFO: (8) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 4.988455ms) Jun 7 21:41:46.907: INFO: (8) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 5.052176ms) Jun 7 21:41:46.907: INFO: (8) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 5.093433ms) Jun 7 21:41:46.907: INFO: (8) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 5.719716ms) Jun 7 21:41:46.907: INFO: (8) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... 
(200; 5.664216ms) Jun 7 21:41:46.908: INFO: (8) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 5.823894ms) Jun 7 21:41:46.908: INFO: (8) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 5.77714ms) Jun 7 21:41:46.908: INFO: (8) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 5.915928ms) Jun 7 21:41:46.910: INFO: (9) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 2.768602ms) Jun 7 21:41:46.912: INFO: (9) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 2.701505ms) Jun 7 21:41:46.912: INFO: (9) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 3.215837ms) Jun 7 21:41:46.912: INFO: (9) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 3.425587ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.70601ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test<... (200; 5.57938ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... 
(200; 5.51136ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 4.449208ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 5.507427ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 5.837392ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.341455ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 5.492987ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 4.506293ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 6.51151ms) Jun 7 21:41:46.914: INFO: (9) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 5.78751ms) Jun 7 21:41:46.918: INFO: (10) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 3.399541ms) Jun 7 21:41:46.918: INFO: (10) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... 
(200; 3.377542ms) Jun 7 21:41:46.918: INFO: (10) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test (200; 4.895707ms) Jun 7 21:41:46.919: INFO: (10) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 4.972912ms) Jun 7 21:41:46.919: INFO: (10) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.88674ms) Jun 7 21:41:46.919: INFO: (10) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 5.029447ms) Jun 7 21:41:46.919: INFO: (10) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 5.040101ms) Jun 7 21:41:46.920: INFO: (10) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 5.0364ms) Jun 7 21:41:46.920: INFO: (10) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 5.156941ms) Jun 7 21:41:46.920: INFO: (10) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... (200; 5.105923ms) Jun 7 21:41:46.920: INFO: (10) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 5.091566ms) Jun 7 21:41:46.920: INFO: (10) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 5.119277ms) Jun 7 21:41:46.920: INFO: (10) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 5.266219ms) Jun 7 21:41:46.920: INFO: (10) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 5.260887ms) Jun 7 21:41:46.920: INFO: (10) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 5.209645ms) Jun 7 21:41:46.923: INFO: (11) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 3.500459ms) Jun 7 21:41:46.923: INFO: (11) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... 
(200; 3.559716ms) Jun 7 21:41:46.924: INFO: (11) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test<... (200; 3.968777ms) Jun 7 21:41:46.924: INFO: (11) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.068ms) Jun 7 21:41:46.924: INFO: (11) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 4.141532ms) Jun 7 21:41:46.924: INFO: (11) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.369577ms) Jun 7 21:41:46.924: INFO: (11) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.351783ms) Jun 7 21:41:46.925: INFO: (11) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 5.306675ms) Jun 7 21:41:46.925: INFO: (11) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 5.666382ms) Jun 7 21:41:46.926: INFO: (11) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 5.908727ms) Jun 7 21:41:46.926: INFO: (11) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 6.225785ms) Jun 7 21:41:46.926: INFO: (11) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 6.373022ms) Jun 7 21:41:46.926: INFO: (11) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 6.61383ms) Jun 7 21:41:46.929: INFO: (12) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... (200; 2.887234ms) Jun 7 21:41:46.929: INFO: (12) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test (200; 5.375513ms) Jun 7 21:41:46.932: INFO: (12) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... 
(200; 5.577439ms) Jun 7 21:41:46.932: INFO: (12) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 5.749329ms) Jun 7 21:41:46.932: INFO: (12) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 5.697488ms) Jun 7 21:41:46.934: INFO: (12) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 7.474234ms) Jun 7 21:41:46.934: INFO: (12) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 7.824143ms) Jun 7 21:41:46.937: INFO: (13) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test<... (200; 3.489736ms) Jun 7 21:41:46.938: INFO: (13) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 3.609096ms) Jun 7 21:41:46.938: INFO: (13) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 3.575015ms) Jun 7 21:41:46.938: INFO: (13) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 3.590339ms) Jun 7 21:41:46.938: INFO: (13) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... 
(200; 3.578654ms) Jun 7 21:41:46.938: INFO: (13) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 3.555754ms) Jun 7 21:41:46.938: INFO: (13) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 3.555069ms) Jun 7 21:41:46.938: INFO: (13) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 3.721653ms) Jun 7 21:41:46.939: INFO: (13) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 4.603449ms) Jun 7 21:41:46.939: INFO: (13) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 4.72643ms) Jun 7 21:41:46.939: INFO: (13) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 4.774619ms) Jun 7 21:41:46.939: INFO: (13) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 4.887588ms) Jun 7 21:41:46.939: INFO: (13) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 4.990674ms) Jun 7 21:41:46.939: INFO: (13) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 4.913458ms) Jun 7 21:41:46.943: INFO: (14) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 3.798481ms) Jun 7 21:41:46.944: INFO: (14) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 4.049859ms) Jun 7 21:41:46.944: INFO: (14) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... (200; 4.343333ms) Jun 7 21:41:46.944: INFO: (14) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.641835ms) Jun 7 21:41:46.944: INFO: (14) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.594357ms) Jun 7 21:41:46.944: INFO: (14) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test<... 
(200; 4.971598ms) Jun 7 21:41:46.946: INFO: (14) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 5.941957ms) Jun 7 21:41:46.946: INFO: (14) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 6.017754ms) Jun 7 21:41:46.946: INFO: (14) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 6.002951ms) Jun 7 21:41:46.946: INFO: (14) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 6.122357ms) Jun 7 21:41:46.946: INFO: (14) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 6.021419ms) Jun 7 21:41:46.946: INFO: (14) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 6.054727ms) Jun 7 21:41:46.946: INFO: (14) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 6.305934ms) Jun 7 21:41:46.948: INFO: (15) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 2.412446ms) Jun 7 21:41:46.951: INFO: (15) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: test<... (200; 4.68927ms) Jun 7 21:41:46.951: INFO: (15) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.632292ms) Jun 7 21:41:46.951: INFO: (15) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 4.658299ms) Jun 7 21:41:46.951: INFO: (15) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 4.751999ms) Jun 7 21:41:46.951: INFO: (15) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 4.859203ms) Jun 7 21:41:46.951: INFO: (15) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.804749ms) Jun 7 21:41:46.951: INFO: (15) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... 
(200; 4.791546ms) Jun 7 21:41:46.952: INFO: (15) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 5.736229ms) Jun 7 21:41:46.952: INFO: (15) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 5.81251ms) Jun 7 21:41:46.953: INFO: (15) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 7.271763ms) Jun 7 21:41:46.953: INFO: (15) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 7.289449ms) Jun 7 21:41:46.953: INFO: (15) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 7.295426ms) Jun 7 21:41:46.953: INFO: (15) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 7.268666ms) Jun 7 21:41:46.953: INFO: (15) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 7.298866ms) Jun 7 21:41:46.957: INFO: (16) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 3.51653ms) Jun 7 21:41:46.958: INFO: (16) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... (200; 4.143907ms) Jun 7 21:41:46.958: INFO: (16) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:1080/proxy/: ... 
(200; 4.137454ms) Jun 7 21:41:46.958: INFO: (16) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 4.218787ms) Jun 7 21:41:46.958: INFO: (16) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 4.257489ms) Jun 7 21:41:46.958: INFO: (16) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 4.728886ms) Jun 7 21:41:46.959: INFO: (16) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 5.287368ms) Jun 7 21:41:46.959: INFO: (16) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 5.363614ms) Jun 7 21:41:46.959: INFO: (16) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 5.336332ms) Jun 7 21:41:46.959: INFO: (16) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... (200; 10.408656ms) Jun 7 21:41:46.970: INFO: (17) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 10.296441ms) Jun 7 21:41:46.970: INFO: (17) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 10.351948ms) Jun 7 21:41:46.970: INFO: (17) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 10.438381ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname2/proxy/: bar (200; 14.392826ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname2/proxy/: tls qux (200; 14.579018ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname2/proxy/: bar (200; 14.543607ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/services/proxy-service-rzzcf:portname1/proxy/: foo (200; 14.502073ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/services/https:proxy-service-rzzcf:tlsportname1/proxy/: tls baz (200; 
14.598166ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 14.642032ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... (200; 14.643072ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 14.605421ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 14.594276ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/services/http:proxy-service-rzzcf:portname1/proxy/: foo (200; 14.645792ms) Jun 7 21:41:46.974: INFO: (17) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... (200; 3.330057ms) Jun 7 21:41:46.978: INFO: (18) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 3.387494ms) Jun 7 21:41:46.978: INFO: (18) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 3.597718ms) Jun 7 21:41:46.978: INFO: (18) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... (200; 3.648128ms) Jun 7 21:41:46.978: INFO: (18) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 3.687341ms) Jun 7 21:41:46.978: INFO: (18) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 3.688854ms) Jun 7 21:41:46.979: INFO: (18) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 3.778334ms) Jun 7 21:41:46.979: INFO: (18) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: ... 
(200; 3.138692ms) Jun 7 21:41:46.983: INFO: (19) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 3.079251ms) Jun 7 21:41:46.983: INFO: (19) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:462/proxy/: tls qux (200; 3.38855ms) Jun 7 21:41:46.984: INFO: (19) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:162/proxy/: bar (200; 3.332615ms) Jun 7 21:41:46.984: INFO: (19) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg:1080/proxy/: test<... (200; 3.457886ms) Jun 7 21:41:46.984: INFO: (19) /api/v1/namespaces/proxy-1507/pods/proxy-service-rzzcf-ntfcg/proxy/: test (200; 3.541462ms) Jun 7 21:41:46.984: INFO: (19) /api/v1/namespaces/proxy-1507/pods/http:proxy-service-rzzcf-ntfcg:160/proxy/: foo (200; 3.470349ms) Jun 7 21:41:46.984: INFO: (19) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:460/proxy/: tls baz (200; 3.465637ms) Jun 7 21:41:46.984: INFO: (19) /api/v1/namespaces/proxy-1507/pods/https:proxy-service-rzzcf-ntfcg:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 7 21:41:54.842: INFO: Successfully updated pod "pod-update-40e38396-c0df-4e56-a453-c50412502ba6" STEP: verifying the updated pod is in kubernetes Jun 7 21:41:54.854: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:41:54.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pods-364" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2213,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:41:54.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:41:55.410: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:41:57.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162915, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162915, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162915, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162915, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:42:00.460: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:42:00.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5562" for this suite. STEP: Destroying namespace "webhook-5562-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.901 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":138,"skipped":2218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:42:00.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0607 21:42:41.007518 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 7 21:42:41.007: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:42:41.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7415" for this suite. 
• [SLOW TEST:40.252 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":139,"skipped":2265,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:42:41.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Jun 7 21:42:45.234: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7352 PodName:pod-sharedvolume-d1e3dcea-63ad-42b7-9860-cf5a0298a4a8 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:42:45.234: INFO: >>> kubeConfig: /root/.kube/config I0607 21:42:45.312317 6 log.go:172] (0xc002bd6840) (0xc000ba8960) Create stream I0607 21:42:45.312360 6 log.go:172] (0xc002bd6840) (0xc000ba8960) Stream added, broadcasting: 1 
I0607 21:42:45.314528 6 log.go:172] (0xc002bd6840) Reply frame received for 1 I0607 21:42:45.314565 6 log.go:172] (0xc002bd6840) (0xc00040cfa0) Create stream I0607 21:42:45.314576 6 log.go:172] (0xc002bd6840) (0xc00040cfa0) Stream added, broadcasting: 3 I0607 21:42:45.315467 6 log.go:172] (0xc002bd6840) Reply frame received for 3 I0607 21:42:45.315540 6 log.go:172] (0xc002bd6840) (0xc00040d220) Create stream I0607 21:42:45.315561 6 log.go:172] (0xc002bd6840) (0xc00040d220) Stream added, broadcasting: 5 I0607 21:42:45.316624 6 log.go:172] (0xc002bd6840) Reply frame received for 5 I0607 21:42:45.382570 6 log.go:172] (0xc002bd6840) Data frame received for 5 I0607 21:42:45.382613 6 log.go:172] (0xc00040d220) (5) Data frame handling I0607 21:42:45.382633 6 log.go:172] (0xc002bd6840) Data frame received for 3 I0607 21:42:45.382647 6 log.go:172] (0xc00040cfa0) (3) Data frame handling I0607 21:42:45.382656 6 log.go:172] (0xc00040cfa0) (3) Data frame sent I0607 21:42:45.382663 6 log.go:172] (0xc002bd6840) Data frame received for 3 I0607 21:42:45.382681 6 log.go:172] (0xc00040cfa0) (3) Data frame handling I0607 21:42:45.384166 6 log.go:172] (0xc002bd6840) Data frame received for 1 I0607 21:42:45.384190 6 log.go:172] (0xc000ba8960) (1) Data frame handling I0607 21:42:45.384210 6 log.go:172] (0xc000ba8960) (1) Data frame sent I0607 21:42:45.384229 6 log.go:172] (0xc002bd6840) (0xc000ba8960) Stream removed, broadcasting: 1 I0607 21:42:45.384323 6 log.go:172] (0xc002bd6840) (0xc000ba8960) Stream removed, broadcasting: 1 I0607 21:42:45.384346 6 log.go:172] (0xc002bd6840) (0xc00040cfa0) Stream removed, broadcasting: 3 I0607 21:42:45.384374 6 log.go:172] (0xc002bd6840) (0xc00040d220) Stream removed, broadcasting: 5 Jun 7 21:42:45.384: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 I0607 21:42:45.384460 6 log.go:172] (0xc002bd6840) Go away received Jun 7 21:42:45.384: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7352" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":140,"skipped":2282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:42:45.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:42:46.123: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:42:48.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162966, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162966, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162966, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162966, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 21:42:50.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162966, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162966, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162966, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727162966, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:42:53.235: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:42:53.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2270-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while 
v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:42:54.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9213" for this suite. STEP: Destroying namespace "webhook-9213-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.302 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":141,"skipped":2315,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:42:54.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8000/secret-test-8ea567de-174c-490c-b04a-2e71c488588e STEP: Creating a pod to test consume secrets Jun 7 21:42:54.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-4fb4067e-202b-4704-856a-defbdc4eb82a" in namespace "secrets-8000" to be "success or failure" Jun 7 21:42:54.759: INFO: Pod "pod-configmaps-4fb4067e-202b-4704-856a-defbdc4eb82a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.293687ms Jun 7 21:42:56.909: INFO: Pod "pod-configmaps-4fb4067e-202b-4704-856a-defbdc4eb82a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156368191s Jun 7 21:42:58.914: INFO: Pod "pod-configmaps-4fb4067e-202b-4704-856a-defbdc4eb82a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161011552s STEP: Saw pod success Jun 7 21:42:58.914: INFO: Pod "pod-configmaps-4fb4067e-202b-4704-856a-defbdc4eb82a" satisfied condition "success or failure" Jun 7 21:42:58.917: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4fb4067e-202b-4704-856a-defbdc4eb82a container env-test: STEP: delete the pod Jun 7 21:42:58.963: INFO: Waiting for pod pod-configmaps-4fb4067e-202b-4704-856a-defbdc4eb82a to disappear Jun 7 21:42:59.004: INFO: Pod pod-configmaps-4fb4067e-202b-4704-856a-defbdc4eb82a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:42:59.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8000" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2331,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:42:59.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e316530e-7c3a-400a-9111-19eea7a882e9 STEP: Creating a pod to test consume configMaps Jun 7 21:42:59.131: INFO: Waiting up to 5m0s for pod "pod-configmaps-935a3203-acae-4e9c-8866-42ef6f38ad64" in namespace "configmap-4740" to be "success or failure" Jun 7 21:42:59.136: INFO: Pod "pod-configmaps-935a3203-acae-4e9c-8866-42ef6f38ad64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.80567ms Jun 7 21:43:01.160: INFO: Pod "pod-configmaps-935a3203-acae-4e9c-8866-42ef6f38ad64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029116905s Jun 7 21:43:03.171: INFO: Pod "pod-configmaps-935a3203-acae-4e9c-8866-42ef6f38ad64": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040280006s STEP: Saw pod success Jun 7 21:43:03.171: INFO: Pod "pod-configmaps-935a3203-acae-4e9c-8866-42ef6f38ad64" satisfied condition "success or failure" Jun 7 21:43:03.175: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-935a3203-acae-4e9c-8866-42ef6f38ad64 container configmap-volume-test: STEP: delete the pod Jun 7 21:43:03.216: INFO: Waiting for pod pod-configmaps-935a3203-acae-4e9c-8866-42ef6f38ad64 to disappear Jun 7 21:43:03.261: INFO: Pod pod-configmaps-935a3203-acae-4e9c-8866-42ef6f38ad64 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:43:03.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4740" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2339,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:43:03.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
pod-subpath-test-projected-9wlf STEP: Creating a pod to test atomic-volume-subpath Jun 7 21:43:03.430: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9wlf" in namespace "subpath-2753" to be "success or failure" Jun 7 21:43:03.445: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.710343ms Jun 7 21:43:05.449: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019312293s Jun 7 21:43:07.453: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 4.023281163s Jun 7 21:43:09.458: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 6.027918622s Jun 7 21:43:11.461: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 8.031319784s Jun 7 21:43:13.483: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 10.052897689s Jun 7 21:43:15.487: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 12.056996215s Jun 7 21:43:17.490: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 14.059918275s Jun 7 21:43:19.511: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 16.080903661s Jun 7 21:43:21.515: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 18.084791593s Jun 7 21:43:23.519: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 20.088877182s Jun 7 21:43:25.523: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Running", Reason="", readiness=true. Elapsed: 22.092741764s Jun 7 21:43:27.526: INFO: Pod "pod-subpath-test-projected-9wlf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.096003506s STEP: Saw pod success Jun 7 21:43:27.526: INFO: Pod "pod-subpath-test-projected-9wlf" satisfied condition "success or failure" Jun 7 21:43:27.528: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-9wlf container test-container-subpath-projected-9wlf: STEP: delete the pod Jun 7 21:43:27.574: INFO: Waiting for pod pod-subpath-test-projected-9wlf to disappear Jun 7 21:43:27.591: INFO: Pod pod-subpath-test-projected-9wlf no longer exists STEP: Deleting pod pod-subpath-test-projected-9wlf Jun 7 21:43:27.591: INFO: Deleting pod "pod-subpath-test-projected-9wlf" in namespace "subpath-2753" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:43:27.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2753" for this suite. • [SLOW TEST:24.342 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":144,"skipped":2346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 
21:43:27.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:43:27.771: INFO: Create a RollingUpdate DaemonSet Jun 7 21:43:27.775: INFO: Check that daemon pods launch on every node of the cluster Jun 7 21:43:27.849: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:43:27.851: INFO: Number of nodes with available pods: 0 Jun 7 21:43:27.851: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:43:28.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:43:28.919: INFO: Number of nodes with available pods: 0 Jun 7 21:43:28.919: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:43:30.070: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:43:30.167: INFO: Number of nodes with available pods: 0 Jun 7 21:43:30.167: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:43:30.907: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:43:30.911: INFO: Number of nodes with available pods: 0 Jun 7 21:43:30.911: INFO: Node jerma-worker is running more than one daemon pod Jun 7 
21:43:31.857: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:43:31.861: INFO: Number of nodes with available pods: 2 Jun 7 21:43:31.861: INFO: Number of running nodes: 2, number of available pods: 2 Jun 7 21:43:31.861: INFO: Update the DaemonSet to trigger a rollout Jun 7 21:43:31.868: INFO: Updating DaemonSet daemon-set Jun 7 21:43:39.908: INFO: Roll back the DaemonSet before rollout is complete Jun 7 21:43:39.914: INFO: Updating DaemonSet daemon-set Jun 7 21:43:39.914: INFO: Make sure DaemonSet rollback is complete Jun 7 21:43:39.958: INFO: Wrong image for pod: daemon-set-7kkf8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 7 21:43:39.958: INFO: Pod daemon-set-7kkf8 is not available Jun 7 21:43:39.994: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:43:40.998: INFO: Wrong image for pod: daemon-set-7kkf8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 7 21:43:40.998: INFO: Pod daemon-set-7kkf8 is not available Jun 7 21:43:41.001: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:43:41.999: INFO: Wrong image for pod: daemon-set-7kkf8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jun 7 21:43:41.999: INFO: Pod daemon-set-7kkf8 is not available Jun 7 21:43:42.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:43:43.009: INFO: Pod daemon-set-r7xff is not available Jun 7 21:43:43.024: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7691, will wait for the garbage collector to delete the pods Jun 7 21:43:43.155: INFO: Deleting DaemonSet.extensions daemon-set took: 11.41342ms Jun 7 21:43:43.555: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.202692ms Jun 7 21:43:49.259: INFO: Number of nodes with available pods: 0 Jun 7 21:43:49.259: INFO: Number of running nodes: 0, number of available pods: 0 Jun 7 21:43:49.261: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7691/daemonsets","resourceVersion":"22534286"},"items":null} Jun 7 21:43:49.263: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7691/pods","resourceVersion":"22534286"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:43:49.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7691" for this suite. 
• [SLOW TEST:21.667 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":145,"skipped":2373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:43:49.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:43:49.327: INFO: Creating deployment "test-recreate-deployment" Jun 7 21:43:49.341: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 7 21:43:49.387: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 7 21:43:51.395: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 7 21:43:51.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163029, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163029, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163029, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163029, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 21:43:53.402: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 7 21:43:53.407: INFO: Updating deployment test-recreate-deployment Jun 7 21:43:53.407: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 7 21:43:54.060: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2790 /apis/apps/v1/namespaces/deployment-2790/deployments/test-recreate-deployment 12e08ecf-968a-46d0-b590-61d0bb612fc7 22534347 2 2020-06-07 21:43:49 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003481fe8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-07 21:43:54 +0000 UTC,LastTransitionTime:2020-06-07 21:43:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-06-07 21:43:54 +0000 UTC,LastTransitionTime:2020-06-07 21:43:49 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 7 21:43:54.064: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2790 /apis/apps/v1/namespaces/deployment-2790/replicasets/test-recreate-deployment-5f94c574ff 9228c98b-8497-49e7-8bf6-1bfdae7b479d 22534345 1 2020-06-07 21:43:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 12e08ecf-968a-46d0-b590-61d0bb612fc7 0xc002e7c377 0xc002e7c378}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e7c3d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 7 21:43:54.064: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 7 21:43:54.064: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-2790 /apis/apps/v1/namespaces/deployment-2790/replicasets/test-recreate-deployment-799c574856 68ed3cba-4a29-492e-9388-9683218168bd 22534334 2 2020-06-07 21:43:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 12e08ecf-968a-46d0-b590-61d0bb612fc7 0xc002e7c447 0xc002e7c448}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e7c4b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 7 21:43:54.091: INFO: Pod "test-recreate-deployment-5f94c574ff-qt6tp" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-qt6tp test-recreate-deployment-5f94c574ff- deployment-2790 /api/v1/namespaces/deployment-2790/pods/test-recreate-deployment-5f94c574ff-qt6tp 0f3dd73b-e14d-4e9e-a846-52e77d09c616 22534348 0 2020-06-07 21:43:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 9228c98b-8497-49e7-8bf6-1bfdae7b479d 0xc002e7c907 0xc002e7c908}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rsqr2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rsqr2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rsqr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,
Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:43:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-07 21:43:54 +0000
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:43:54.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2790" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":146,"skipped":2400,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:43:54.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-svp5 STEP: Creating a pod to test atomic-volume-subpath 
Jun 7 21:43:54.416: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-svp5" in namespace "subpath-9442" to be "success or failure" Jun 7 21:43:54.455: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.662078ms Jun 7 21:43:56.458: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042339657s Jun 7 21:43:58.462: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 4.045612527s Jun 7 21:44:00.466: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 6.049850614s Jun 7 21:44:02.470: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 8.054364552s Jun 7 21:44:04.475: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 10.058522428s Jun 7 21:44:06.479: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 12.06292602s Jun 7 21:44:08.484: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 14.067853424s Jun 7 21:44:10.488: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 16.072414489s Jun 7 21:44:12.492: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 18.076392418s Jun 7 21:44:14.497: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 20.081179465s Jun 7 21:44:16.502: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Running", Reason="", readiness=true. Elapsed: 22.085729667s Jun 7 21:44:18.506: INFO: Pod "pod-subpath-test-downwardapi-svp5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.089659242s STEP: Saw pod success Jun 7 21:44:18.506: INFO: Pod "pod-subpath-test-downwardapi-svp5" satisfied condition "success or failure" Jun 7 21:44:18.509: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-svp5 container test-container-subpath-downwardapi-svp5: STEP: delete the pod Jun 7 21:44:18.568: INFO: Waiting for pod pod-subpath-test-downwardapi-svp5 to disappear Jun 7 21:44:18.575: INFO: Pod pod-subpath-test-downwardapi-svp5 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-svp5 Jun 7 21:44:18.575: INFO: Deleting pod "pod-subpath-test-downwardapi-svp5" in namespace "subpath-9442" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:44:18.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9442" for this suite. • [SLOW TEST:24.489 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":147,"skipped":2408,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client Jun 7 21:44:18.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-9daa893b-8931-4bcd-b763-4418a70587f3 STEP: Creating a pod to test consume secrets Jun 7 21:44:18.829: INFO: Waiting up to 5m0s for pod "pod-secrets-e53fe0ed-c2c4-4e82-9422-1679648dbbac" in namespace "secrets-9567" to be "success or failure" Jun 7 21:44:18.920: INFO: Pod "pod-secrets-e53fe0ed-c2c4-4e82-9422-1679648dbbac": Phase="Pending", Reason="", readiness=false. Elapsed: 91.152363ms Jun 7 21:44:20.935: INFO: Pod "pod-secrets-e53fe0ed-c2c4-4e82-9422-1679648dbbac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105495763s Jun 7 21:44:22.939: INFO: Pod "pod-secrets-e53fe0ed-c2c4-4e82-9422-1679648dbbac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10976613s STEP: Saw pod success Jun 7 21:44:22.939: INFO: Pod "pod-secrets-e53fe0ed-c2c4-4e82-9422-1679648dbbac" satisfied condition "success or failure" Jun 7 21:44:22.941: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e53fe0ed-c2c4-4e82-9422-1679648dbbac container secret-volume-test: STEP: delete the pod Jun 7 21:44:22.973: INFO: Waiting for pod pod-secrets-e53fe0ed-c2c4-4e82-9422-1679648dbbac to disappear Jun 7 21:44:22.988: INFO: Pod pod-secrets-e53fe0ed-c2c4-4e82-9422-1679648dbbac no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:44:22.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9567" for this suite. 
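The Secrets test above mounts a Secret as a volume with an item-to-path mapping and an explicit per-item file mode, which is what "mappings and Item Mode set" refers to. A minimal manifest sketch of that shape (the names, image, and key are illustrative placeholders, not the values generated by the e2e framework):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # placeholder image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example   # illustrative name
      items:
      - key: data-1                # map this key...
        path: new-path-data-1      # ...to a remapped file name
        mode: 0400                 # the per-item "Item Mode" the test asserts
```

The test then reads the file's content and permissions from the container's logs to confirm both the remapped path and the 0400 mode.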
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:44:22.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Jun 7 21:44:23.123: INFO: Waiting up to 5m0s for pod "pod-461eaf58-cc8c-4bdd-b7ae-fd674bd19ea3" in namespace "emptydir-3701" to be "success or failure" Jun 7 21:44:23.126: INFO: Pod "pod-461eaf58-cc8c-4bdd-b7ae-fd674bd19ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.406917ms Jun 7 21:44:25.131: INFO: Pod "pod-461eaf58-cc8c-4bdd-b7ae-fd674bd19ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00838391s Jun 7 21:44:27.135: INFO: Pod "pod-461eaf58-cc8c-4bdd-b7ae-fd674bd19ea3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011823799s STEP: Saw pod success Jun 7 21:44:27.135: INFO: Pod "pod-461eaf58-cc8c-4bdd-b7ae-fd674bd19ea3" satisfied condition "success or failure" Jun 7 21:44:27.137: INFO: Trying to get logs from node jerma-worker pod pod-461eaf58-cc8c-4bdd-b7ae-fd674bd19ea3 container test-container: STEP: delete the pod Jun 7 21:44:27.191: INFO: Waiting for pod pod-461eaf58-cc8c-4bdd-b7ae-fd674bd19ea3 to disappear Jun 7 21:44:27.198: INFO: Pod pod-461eaf58-cc8c-4bdd-b7ae-fd674bd19ea3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:44:27.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3701" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2471,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:44:27.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-4d774ba8-e50a-431f-b599-5fe1d1381728 STEP: Creating a pod to test consume configMaps Jun 7 21:44:27.272: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-878f1a89-87a6-4694-ab96-be12a241c6d9" in namespace "projected-2668" to be "success or failure" Jun 7 21:44:27.276: INFO: Pod "pod-projected-configmaps-878f1a89-87a6-4694-ab96-be12a241c6d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.283665ms Jun 7 21:44:29.280: INFO: Pod "pod-projected-configmaps-878f1a89-87a6-4694-ab96-be12a241c6d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007531222s Jun 7 21:44:31.284: INFO: Pod "pod-projected-configmaps-878f1a89-87a6-4694-ab96-be12a241c6d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01146669s STEP: Saw pod success Jun 7 21:44:31.284: INFO: Pod "pod-projected-configmaps-878f1a89-87a6-4694-ab96-be12a241c6d9" satisfied condition "success or failure" Jun 7 21:44:31.287: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-878f1a89-87a6-4694-ab96-be12a241c6d9 container projected-configmap-volume-test: STEP: delete the pod Jun 7 21:44:31.451: INFO: Waiting for pod pod-projected-configmaps-878f1a89-87a6-4694-ab96-be12a241c6d9 to disappear Jun 7 21:44:31.510: INFO: Pod pod-projected-configmaps-878f1a89-87a6-4694-ab96-be12a241c6d9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:44:31.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2668" for this suite. 
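The projected-configMap test follows the same consume-and-verify pattern, but mounts the ConfigMap through a `projected` volume rather than a plain `configMap` volume. A sketch of such a pod (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # placeholder image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-example   # illustrative name
```

A `projected` volume can combine several sources (configMaps, secrets, downward API, service account tokens) under one mount point; here a single configMap source is enough to exercise the path.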
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:44:31.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 7 21:44:31.693: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4592 /api/v1/namespaces/watch-4592/configmaps/e2e-watch-test-watch-closed 5c0f5230-a624-454f-ac25-0664b9345fc3 22534605 0 2020-06-07 21:44:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 7 21:44:31.693: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4592 /api/v1/namespaces/watch-4592/configmaps/e2e-watch-test-watch-closed 5c0f5230-a624-454f-ac25-0664b9345fc3 22534607 0 2020-06-07 21:44:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the 
configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 7 21:44:31.708: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4592 /api/v1/namespaces/watch-4592/configmaps/e2e-watch-test-watch-closed 5c0f5230-a624-454f-ac25-0664b9345fc3 22534608 0 2020-06-07 21:44:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 7 21:44:31.708: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4592 /api/v1/namespaces/watch-4592/configmaps/e2e-watch-test-watch-closed 5c0f5230-a624-454f-ac25-0664b9345fc3 22534610 0 2020-06-07 21:44:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:44:31.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4592" for this suite. 
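The watch test above exercises the resourceVersion resume semantics: a client whose watch has closed can open a new watch starting from the last resourceVersion it observed and will receive every change made in the interim (here, the second MODIFIED and the DELETED events). A minimal in-memory sketch of that resume logic, not the real API-server or client-go implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    type: str              # ADDED / MODIFIED / DELETED
    resource_version: int

class EventLog:
    """Toy stand-in for the API server's ordered change log of one object."""
    def __init__(self):
        self.events = []
        self.rv = 0

    def record(self, event_type):
        # Every change bumps the resourceVersion, like etcd revisions do.
        self.rv += 1
        self.events.append(Event(event_type, self.rv))
        return self.rv

    def watch(self, since_rv):
        # A (re)started watch replays every change made strictly after
        # the resourceVersion the client last observed.
        return [e for e in self.events if e.resource_version > since_rv]

log = EventLog()
log.record("ADDED")                  # configmap created
last_seen = log.record("MODIFIED")   # first mutation; the watch closes here
log.record("MODIFIED")               # second mutation, watch closed
log.record("DELETED")                # deletion, watch closed

resumed = log.watch(last_seen)
print([e.type for e in resumed])     # → ['MODIFIED', 'DELETED']
```

This mirrors the log above: the restarted watch observes exactly the MODIFIED and DELETED notifications it missed while closed. (Real clusters additionally compact old history, so a too-old resourceVersion yields a 410 Gone instead of a replay.)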
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":151,"skipped":2515,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:44:31.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 7 21:44:31.856: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:31.869: INFO: Number of nodes with available pods: 0 Jun 7 21:44:31.869: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:32.873: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:32.876: INFO: Number of nodes with available pods: 0 Jun 7 21:44:32.876: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:33.982: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:34.019: INFO: Number of nodes with available pods: 0 Jun 7 21:44:34.019: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:34.874: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:34.877: INFO: Number of nodes with available pods: 0 Jun 7 21:44:34.877: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:35.874: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:35.878: INFO: Number of nodes with available pods: 1 Jun 7 21:44:35.878: INFO: Node jerma-worker2 is running more than one daemon pod Jun 7 21:44:36.873: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:36.876: INFO: Number of nodes with available pods: 2 Jun 7 21:44:36.876: INFO: Number of running nodes: 
2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jun 7 21:44:36.894: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:36.909: INFO: Number of nodes with available pods: 1 Jun 7 21:44:36.909: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:37.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:37.918: INFO: Number of nodes with available pods: 1 Jun 7 21:44:37.918: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:38.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:38.918: INFO: Number of nodes with available pods: 1 Jun 7 21:44:38.918: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:39.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:39.919: INFO: Number of nodes with available pods: 1 Jun 7 21:44:39.919: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:40.913: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:40.915: INFO: Number of nodes with available pods: 1 Jun 7 21:44:40.915: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:41.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:41.918: 
INFO: Number of nodes with available pods: 1 Jun 7 21:44:41.918: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:42.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:42.918: INFO: Number of nodes with available pods: 1 Jun 7 21:44:42.918: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:43.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:43.919: INFO: Number of nodes with available pods: 1 Jun 7 21:44:43.919: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:44.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:44.917: INFO: Number of nodes with available pods: 1 Jun 7 21:44:44.917: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:45.919: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:45.922: INFO: Number of nodes with available pods: 1 Jun 7 21:44:45.922: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:46.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:46.918: INFO: Number of nodes with available pods: 1 Jun 7 21:44:46.918: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:47.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Jun 7 21:44:47.919: INFO: Number of nodes with available pods: 1 Jun 7 21:44:47.919: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:48.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:48.920: INFO: Number of nodes with available pods: 1 Jun 7 21:44:48.920: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:49.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:49.918: INFO: Number of nodes with available pods: 1 Jun 7 21:44:49.919: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:50.938: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:51.115: INFO: Number of nodes with available pods: 1 Jun 7 21:44:51.115: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:51.913: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:51.933: INFO: Number of nodes with available pods: 1 Jun 7 21:44:51.933: INFO: Node jerma-worker is running more than one daemon pod Jun 7 21:44:52.913: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 21:44:52.915: INFO: Number of nodes with available pods: 2 Jun 7 21:44:52.916: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting 
DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-561, will wait for the garbage collector to delete the pods Jun 7 21:44:52.976: INFO: Deleting DaemonSet.extensions daemon-set took: 5.953123ms Jun 7 21:44:53.276: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.23127ms Jun 7 21:44:59.579: INFO: Number of nodes with available pods: 0 Jun 7 21:44:59.579: INFO: Number of running nodes: 0, number of available pods: 0 Jun 7 21:44:59.581: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-561/daemonsets","resourceVersion":"22534765"},"items":null} Jun 7 21:44:59.584: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-561/pods","resourceVersion":"22534765"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:44:59.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-561" for this suite. 
• [SLOW TEST:27.881 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":152,"skipped":2523,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:44:59.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 7 21:44:59.662: INFO: Waiting up to 5m0s for pod "pod-ba64aa5e-45f8-40c6-a5c2-3e4287a09806" in namespace "emptydir-1000" to be "success or failure" Jun 7 21:44:59.665: INFO: Pod "pod-ba64aa5e-45f8-40c6-a5c2-3e4287a09806": Phase="Pending", Reason="", readiness=false. Elapsed: 3.434328ms Jun 7 21:45:01.670: INFO: Pod "pod-ba64aa5e-45f8-40c6-a5c2-3e4287a09806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007781749s Jun 7 21:45:03.674: INFO: Pod "pod-ba64aa5e-45f8-40c6-a5c2-3e4287a09806": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012217364s STEP: Saw pod success Jun 7 21:45:03.674: INFO: Pod "pod-ba64aa5e-45f8-40c6-a5c2-3e4287a09806" satisfied condition "success or failure" Jun 7 21:45:03.677: INFO: Trying to get logs from node jerma-worker pod pod-ba64aa5e-45f8-40c6-a5c2-3e4287a09806 container test-container: STEP: delete the pod Jun 7 21:45:03.696: INFO: Waiting for pod pod-ba64aa5e-45f8-40c6-a5c2-3e4287a09806 to disappear Jun 7 21:45:03.701: INFO: Pod pod-ba64aa5e-45f8-40c6-a5c2-3e4287a09806 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:45:03.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1000" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2524,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:45:03.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:45:20.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6670" for this suite. • [SLOW TEST:16.627 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":154,"skipped":2537,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:45:20.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:45:20.436: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 7 21:45:25.440: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 7 21:45:25.440: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 7 21:45:25.493: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5692 /apis/apps/v1/namespaces/deployment-5692/deployments/test-cleanup-deployment 6b8dcb70-6ef6-4ee7-9fd6-fbb281ad9882 22534947 1 2020-06-07 21:45:25 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c86698 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 7 21:45:25.572: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-5692 /apis/apps/v1/namespaces/deployment-5692/replicasets/test-cleanup-deployment-55ffc6b7b6 b0380b0b-9efe-40c2-8a87-31804ce1a318 22534953 1 2020-06-07 21:45:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 6b8dcb70-6ef6-4ee7-9fd6-fbb281ad9882 0xc001c86d27 0xc001c86d28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} 
{[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c86da8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 7 21:45:25.572: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 7 21:45:25.572: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5692 /apis/apps/v1/namespaces/deployment-5692/replicasets/test-cleanup-controller 2da1e5cb-45e6-4418-9a88-99b150b66014 22534948 1 2020-06-07 21:45:20 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 6b8dcb70-6ef6-4ee7-9fd6-fbb281ad9882 0xc001c86c2f 0xc001c86c40}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001c86ca8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 7 21:45:25.625: INFO: Pod "test-cleanup-controller-7k467" is available: &Pod{ObjectMeta:{test-cleanup-controller-7k467 test-cleanup-controller- deployment-5692 /api/v1/namespaces/deployment-5692/pods/test-cleanup-controller-7k467 88f34d0a-f49f-4c9f-b832-fa4c01a2194d 22534936 0 2020-06-07 21:45:20 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 2da1e5cb-45e6-4418-9a88-99b150b66014 0xc0034b1397 0xc0034b1398}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fktjx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fktjx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fktjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,A
ctiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:45:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:45:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:45:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:45:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.50,StartTime:2020-06-07 21:45:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 21:45:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e62beffca338b3fe7c2fe39ac702cb90f3038ba7977bbe58ca5a96911b841ea7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 21:45:25.626: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-4txjt" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-4txjt test-cleanup-deployment-55ffc6b7b6- deployment-5692 /api/v1/namespaces/deployment-5692/pods/test-cleanup-deployment-55ffc6b7b6-4txjt b410228f-64e4-4df7-85f8-79820517cfd5 22534955 0 2020-06-07 21:45:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 b0380b0b-9efe-40c2-8a87-31804ce1a318 0xc0034b1527 0xc0034b1528}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fktjx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fktjx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fktjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Host
name:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:45:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:45:25.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5692" for this suite. 
• [SLOW TEST:5.307 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":155,"skipped":2570,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:45:25.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jun 7 21:45:25.764: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:45:41.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8811" for this suite. 
• [SLOW TEST:16.221 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":156,"skipped":2570,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:45:41.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-a0610974-6c39-4ee3-9344-db6f45814583 in namespace container-probe-3081 Jun 7 21:45:46.063: INFO: Started pod test-webserver-a0610974-6c39-4ee3-9344-db6f45814583 in namespace container-probe-3081 STEP: checking the pod's current state and verifying that restartCount is present Jun 7 21:45:46.066: INFO: Initial restart count of pod 
test-webserver-a0610974-6c39-4ee3-9344-db6f45814583 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:49:46.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3081" for this suite. • [SLOW TEST:244.912 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2573,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:49:46.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a 
pod to test downward API volume plugin Jun 7 21:49:47.084: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4380d532-7193-4f3c-b162-2d216d116467" in namespace "projected-4186" to be "success or failure" Jun 7 21:49:47.126: INFO: Pod "downwardapi-volume-4380d532-7193-4f3c-b162-2d216d116467": Phase="Pending", Reason="", readiness=false. Elapsed: 41.811825ms Jun 7 21:49:49.130: INFO: Pod "downwardapi-volume-4380d532-7193-4f3c-b162-2d216d116467": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045804584s Jun 7 21:49:51.134: INFO: Pod "downwardapi-volume-4380d532-7193-4f3c-b162-2d216d116467": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049775838s STEP: Saw pod success Jun 7 21:49:51.134: INFO: Pod "downwardapi-volume-4380d532-7193-4f3c-b162-2d216d116467" satisfied condition "success or failure" Jun 7 21:49:51.138: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4380d532-7193-4f3c-b162-2d216d116467 container client-container: STEP: delete the pod Jun 7 21:49:51.265: INFO: Waiting for pod downwardapi-volume-4380d532-7193-4f3c-b162-2d216d116467 to disappear Jun 7 21:49:51.270: INFO: Pod downwardapi-volume-4380d532-7193-4f3c-b162-2d216d116467 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:49:51.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4186" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:49:51.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-bed5dfd2-57ca-433c-9921-07c52137f0ee in namespace container-probe-6579 Jun 7 21:49:55.360: INFO: Started pod liveness-bed5dfd2-57ca-433c-9921-07c52137f0ee in namespace container-probe-6579 STEP: checking the pod's current state and verifying that restartCount is present Jun 7 21:49:55.363: INFO: Initial restart count of pod liveness-bed5dfd2-57ca-433c-9921-07c52137f0ee is 0 Jun 7 21:50:15.427: INFO: Restart count of pod container-probe-6579/liveness-bed5dfd2-57ca-433c-9921-07c52137f0ee is now 1 (20.063262524s elapsed) Jun 7 21:50:35.513: INFO: Restart count of pod container-probe-6579/liveness-bed5dfd2-57ca-433c-9921-07c52137f0ee is now 2 (40.150190318s elapsed) Jun 7 21:50:55.584: INFO: Restart count of pod 
container-probe-6579/liveness-bed5dfd2-57ca-433c-9921-07c52137f0ee is now 3 (1m0.220823599s elapsed) Jun 7 21:51:15.674: INFO: Restart count of pod container-probe-6579/liveness-bed5dfd2-57ca-433c-9921-07c52137f0ee is now 4 (1m20.310645453s elapsed) Jun 7 21:52:16.040: INFO: Restart count of pod container-probe-6579/liveness-bed5dfd2-57ca-433c-9921-07c52137f0ee is now 5 (2m20.677056499s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:52:16.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6579" for this suite. • [SLOW TEST:144.838 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:52:16.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod 
communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9899 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 7 21:52:16.157: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 7 21:52:38.492: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.52 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9899 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:52:38.492: INFO: >>> kubeConfig: /root/.kube/config I0607 21:52:38.527168 6 log.go:172] (0xc00090a2c0) (0xc00144c820) Create stream I0607 21:52:38.527199 6 log.go:172] (0xc00090a2c0) (0xc00144c820) Stream added, broadcasting: 1 I0607 21:52:38.528988 6 log.go:172] (0xc00090a2c0) Reply frame received for 1 I0607 21:52:38.529024 6 log.go:172] (0xc00090a2c0) (0xc0021f4d20) Create stream I0607 21:52:38.529037 6 log.go:172] (0xc00090a2c0) (0xc0021f4d20) Stream added, broadcasting: 3 I0607 21:52:38.530553 6 log.go:172] (0xc00090a2c0) Reply frame received for 3 I0607 21:52:38.530601 6 log.go:172] (0xc00090a2c0) (0xc0014195e0) Create stream I0607 21:52:38.530619 6 log.go:172] (0xc00090a2c0) (0xc0014195e0) Stream added, broadcasting: 5 I0607 21:52:38.531518 6 log.go:172] (0xc00090a2c0) Reply frame received for 5 I0607 21:52:39.605340 6 log.go:172] (0xc00090a2c0) Data frame received for 3 I0607 21:52:39.605457 6 log.go:172] (0xc0021f4d20) (3) Data frame handling I0607 21:52:39.605542 6 log.go:172] (0xc0021f4d20) (3) Data frame sent I0607 21:52:39.605587 6 log.go:172] (0xc00090a2c0) Data frame received for 3 I0607 21:52:39.605638 6 log.go:172] (0xc0021f4d20) (3) Data frame handling I0607 21:52:39.605764 6 log.go:172] (0xc00090a2c0) Data frame received for 5 I0607 
21:52:39.605801 6 log.go:172] (0xc0014195e0) (5) Data frame handling I0607 21:52:39.608235 6 log.go:172] (0xc00090a2c0) Data frame received for 1 I0607 21:52:39.608271 6 log.go:172] (0xc00144c820) (1) Data frame handling I0607 21:52:39.608307 6 log.go:172] (0xc00144c820) (1) Data frame sent I0607 21:52:39.608333 6 log.go:172] (0xc00090a2c0) (0xc00144c820) Stream removed, broadcasting: 1 I0607 21:52:39.608466 6 log.go:172] (0xc00090a2c0) (0xc00144c820) Stream removed, broadcasting: 1 I0607 21:52:39.608593 6 log.go:172] (0xc00090a2c0) (0xc0021f4d20) Stream removed, broadcasting: 3 I0607 21:52:39.608628 6 log.go:172] (0xc00090a2c0) (0xc0014195e0) Stream removed, broadcasting: 5 Jun 7 21:52:39.608: INFO: Found all expected endpoints: [netserver-0] I0607 21:52:39.608853 6 log.go:172] (0xc00090a2c0) Go away received Jun 7 21:52:39.612: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.200 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9899 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 7 21:52:39.612: INFO: >>> kubeConfig: /root/.kube/config I0607 21:52:39.644294 6 log.go:172] (0xc00090a840) (0xc00144cb40) Create stream I0607 21:52:39.644322 6 log.go:172] (0xc00090a840) (0xc00144cb40) Stream added, broadcasting: 1 I0607 21:52:39.646285 6 log.go:172] (0xc00090a840) Reply frame received for 1 I0607 21:52:39.646304 6 log.go:172] (0xc00090a840) (0xc001419720) Create stream I0607 21:52:39.646310 6 log.go:172] (0xc00090a840) (0xc001419720) Stream added, broadcasting: 3 I0607 21:52:39.647096 6 log.go:172] (0xc00090a840) Reply frame received for 3 I0607 21:52:39.647124 6 log.go:172] (0xc00090a840) (0xc001ee8f00) Create stream I0607 21:52:39.647135 6 log.go:172] (0xc00090a840) (0xc001ee8f00) Stream added, broadcasting: 5 I0607 21:52:39.648023 6 log.go:172] (0xc00090a840) Reply frame received for 5 I0607 21:52:40.738678 6 log.go:172] (0xc00090a840) Data frame received 
for 3 I0607 21:52:40.738712 6 log.go:172] (0xc001419720) (3) Data frame handling I0607 21:52:40.738735 6 log.go:172] (0xc001419720) (3) Data frame sent I0607 21:52:40.738749 6 log.go:172] (0xc00090a840) Data frame received for 3 I0607 21:52:40.738762 6 log.go:172] (0xc001419720) (3) Data frame handling I0607 21:52:40.739035 6 log.go:172] (0xc00090a840) Data frame received for 5 I0607 21:52:40.739054 6 log.go:172] (0xc001ee8f00) (5) Data frame handling I0607 21:52:40.740789 6 log.go:172] (0xc00090a840) Data frame received for 1 I0607 21:52:40.740823 6 log.go:172] (0xc00144cb40) (1) Data frame handling I0607 21:52:40.740844 6 log.go:172] (0xc00144cb40) (1) Data frame sent I0607 21:52:40.740862 6 log.go:172] (0xc00090a840) (0xc00144cb40) Stream removed, broadcasting: 1 I0607 21:52:40.740882 6 log.go:172] (0xc00090a840) Go away received I0607 21:52:40.741033 6 log.go:172] (0xc00090a840) (0xc00144cb40) Stream removed, broadcasting: 1 I0607 21:52:40.741060 6 log.go:172] (0xc00090a840) (0xc001419720) Stream removed, broadcasting: 3 I0607 21:52:40.741075 6 log.go:172] (0xc00090a840) (0xc001ee8f00) Stream removed, broadcasting: 5 Jun 7 21:52:40.741: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:52:40.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9899" for this suite. 
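The SPDY stream noise above is the framework executing a UDP probe inside the `host-test-container-pod`. Run by hand, the same check is roughly equivalent to the following (namespace, pod, container, target IP, and port are taken from the log; `kubectl exec` stands in for the framework's own exec transport):

```shell
# Send "hostName" over UDP to the netserver pod and expect its hostname back;
# an empty result (filtered by grep) would mean the endpoint did not answer.
kubectl exec -n pod-network-test-9899 host-test-container-pod -c agnhost -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.1.52 8081 | grep -v "^\s*$"'
```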
• [SLOW TEST:24.636 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2693,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:52:40.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-cf32ddf1-cb60-4a5c-94c3-dd9503ff6848 STEP: Creating a pod to test consume secrets Jun 7 21:52:40.852: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e97a43de-d872-473c-9c2c-95ce4f01fd89" in namespace "projected-1804" to be "success or failure" Jun 7 21:52:40.870: INFO: Pod "pod-projected-secrets-e97a43de-d872-473c-9c2c-95ce4f01fd89": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.258884ms Jun 7 21:52:42.874: INFO: Pod "pod-projected-secrets-e97a43de-d872-473c-9c2c-95ce4f01fd89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022285902s Jun 7 21:52:45.395: INFO: Pod "pod-projected-secrets-e97a43de-d872-473c-9c2c-95ce4f01fd89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.543134868s STEP: Saw pod success Jun 7 21:52:45.395: INFO: Pod "pod-projected-secrets-e97a43de-d872-473c-9c2c-95ce4f01fd89" satisfied condition "success or failure" Jun 7 21:52:45.398: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-e97a43de-d872-473c-9c2c-95ce4f01fd89 container secret-volume-test: STEP: delete the pod Jun 7 21:52:45.492: INFO: Waiting for pod pod-projected-secrets-e97a43de-d872-473c-9c2c-95ce4f01fd89 to disappear Jun 7 21:52:45.532: INFO: Pod pod-projected-secrets-e97a43de-d872-473c-9c2c-95ce4f01fd89 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:52:45.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1804" for this suite. 
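The "consumable in multiple volumes" case mounts one secret at two paths via two projected volumes. An illustrative sketch (secret name, pod name, image, and paths are placeholders, not the generated names in the log):

```shell
# Hypothetical sketch: the same secret consumed through two projected volumes.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test        # placeholder name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets        # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                   # assumed placeholder image
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
EOF
```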
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2694,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:52:45.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-fef5afe9-dcac-459a-a4ac-f46b0ab8069e STEP: Creating a pod to test consume secrets Jun 7 21:52:45.714: INFO: Waiting up to 5m0s for pod "pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6" in namespace "secrets-6136" to be "success or failure" Jun 7 21:52:45.736: INFO: Pod "pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.387482ms Jun 7 21:52:47.793: INFO: Pod "pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079281645s Jun 7 21:52:49.940: INFO: Pod "pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225885867s Jun 7 21:52:51.944: INFO: Pod "pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.230727068s STEP: Saw pod success Jun 7 21:52:51.944: INFO: Pod "pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6" satisfied condition "success or failure" Jun 7 21:52:51.947: INFO: Trying to get logs from node jerma-worker pod pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6 container secret-volume-test: STEP: delete the pod Jun 7 21:52:52.011: INFO: Waiting for pod pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6 to disappear Jun 7 21:52:52.020: INFO: Pod pod-secrets-adcfc09b-6583-47a2-a109-2be50ed725a6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:52:52.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6136" for this suite. • [SLOW TEST:6.487 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:52:52.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:53:03.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-650" for this suite. • [SLOW TEST:11.233 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":163,"skipped":2729,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:53:03.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:53:08.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3096" for this suite. 
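The adoption flow above ("Given a Pod with a 'name' label pod-adoption is created … Then the orphan pod is adopted") can be reproduced by hand. The sketch below follows the same steps; the image is an assumed placeholder:

```shell
# Hypothetical sketch of RC adoption: a bare labeled pod first, then an RC
# whose selector matches it. The controller adopts the orphan instead of
# creating a replacement pod.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: k8s.gcr.io/pause:3.1     # assumed placeholder image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: k8s.gcr.io/pause:3.1
EOF
# After adoption the pod carries an ownerReference pointing at the RC:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'
```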
• [SLOW TEST:5.302 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":164,"skipped":2731,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:53:08.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 7 21:53:09.163: INFO: Pod name wrapped-volume-race-bb35de93-1e05-4132-a364-921f77c3669b: Found 0 pods out of 5 Jun 7 21:53:14.565: INFO: Pod name wrapped-volume-race-bb35de93-1e05-4132-a364-921f77c3669b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bb35de93-1e05-4132-a364-921f77c3669b in namespace emptydir-wrapper-3615, will wait for the garbage collector to delete the pods Jun 7 21:53:28.815: INFO: Deleting ReplicationController wrapped-volume-race-bb35de93-1e05-4132-a364-921f77c3669b took: 28.208462ms Jun 7 
21:53:29.115: INFO: Terminating ReplicationController wrapped-volume-race-bb35de93-1e05-4132-a364-921f77c3669b pods took: 300.28436ms STEP: Creating RC which spawns configmap-volume pods Jun 7 21:53:40.543: INFO: Pod name wrapped-volume-race-ab5084d6-c827-4531-b676-5c034a3ea9b0: Found 0 pods out of 5 Jun 7 21:53:45.555: INFO: Pod name wrapped-volume-race-ab5084d6-c827-4531-b676-5c034a3ea9b0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ab5084d6-c827-4531-b676-5c034a3ea9b0 in namespace emptydir-wrapper-3615, will wait for the garbage collector to delete the pods Jun 7 21:53:59.636: INFO: Deleting ReplicationController wrapped-volume-race-ab5084d6-c827-4531-b676-5c034a3ea9b0 took: 7.835492ms Jun 7 21:54:00.037: INFO: Terminating ReplicationController wrapped-volume-race-ab5084d6-c827-4531-b676-5c034a3ea9b0 pods took: 400.285467ms STEP: Creating RC which spawns configmap-volume pods Jun 7 21:54:10.306: INFO: Pod name wrapped-volume-race-9d2fa170-8ce6-48f4-9e7f-9cda3f1efe34: Found 0 pods out of 5 Jun 7 21:54:15.313: INFO: Pod name wrapped-volume-race-9d2fa170-8ce6-48f4-9e7f-9cda3f1efe34: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9d2fa170-8ce6-48f4-9e7f-9cda3f1efe34 in namespace emptydir-wrapper-3615, will wait for the garbage collector to delete the pods Jun 7 21:54:29.398: INFO: Deleting ReplicationController wrapped-volume-race-9d2fa170-8ce6-48f4-9e7f-9cda3f1efe34 took: 7.444455ms Jun 7 21:54:29.698: INFO: Terminating ReplicationController wrapped-volume-race-9d2fa170-8ce6-48f4-9e7f-9cda3f1efe34 pods took: 300.296217ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:54:40.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3615" for 
this suite. • [SLOW TEST:91.923 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":165,"skipped":2746,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:54:40.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-dde7b385-bb1c-42e5-aff0-8465d1e6e03e Jun 7 21:54:40.665: INFO: Pod name my-hostname-basic-dde7b385-bb1c-42e5-aff0-8465d1e6e03e: Found 0 pods out of 1 Jun 7 21:54:45.679: INFO: Pod name my-hostname-basic-dde7b385-bb1c-42e5-aff0-8465d1e6e03e: Found 1 pods out of 1 Jun 7 21:54:45.679: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-dde7b385-bb1c-42e5-aff0-8465d1e6e03e" are running Jun 7 21:54:45.749: INFO: Pod "my-hostname-basic-dde7b385-bb1c-42e5-aff0-8465d1e6e03e-sr5hl" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 21:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 21:54:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 21:54:43 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-07 21:54:40 +0000 UTC Reason: Message:}]) Jun 7 21:54:45.750: INFO: Trying to dial the pod Jun 7 21:54:50.762: INFO: Controller my-hostname-basic-dde7b385-bb1c-42e5-aff0-8465d1e6e03e: Got expected result from replica 1 [my-hostname-basic-dde7b385-bb1c-42e5-aff0-8465d1e6e03e-sr5hl]: "my-hostname-basic-dde7b385-bb1c-42e5-aff0-8465d1e6e03e-sr5hl", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:54:50.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8474" for this suite. 
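The "basic image" RC runs a container that serves its own hostname, so dialing any replica should return that pod's name — which is what the "Got expected result from replica 1" line verifies. An illustrative sketch (the RC name is a placeholder; the agnhost image and `serve-hostname` argument match the pod spec dumped later in this log):

```shell
# Hypothetical sketch: each replica answers HTTP requests with its hostname.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic            # placeholder name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]
        ports:
        - containerPort: 80
EOF
```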
• [SLOW TEST:10.285 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":166,"skipped":2751,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:54:50.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 7 21:54:51.647: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 7 21:54:53.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163691, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163691, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163691, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163691, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:54:56.700: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:54:56.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:54:57.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2650" for this suite. 
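The conversion test works by pointing the CRD at the webhook service deployed above. A hedged sketch of the CRD stanza involved is below; the service name and namespace match the log, while the review versions, path, port, and CA bundle are assumptions:

```shell
# Hypothetical fragment of a CRD spec wiring in a conversion webhook.
# Not applyable on its own; it belongs under spec: of a full
# CustomResourceDefinition (apiextensions.k8s.io/v1).
cat <<'EOF' > crd-conversion-fragment.yaml
conversion:
  strategy: Webhook
  webhook:
    conversionReviewVersions: ["v1", "v1beta1"]   # assumed
    clientConfig:
      service:
        namespace: crd-webhook-2650
        name: e2e-test-crd-conversion-webhook
        path: /crdconvert                          # assumed path
        port: 9443                                 # assumed port
      caBundle: BASE64_ENCODED_CA_CERT             # placeholder
EOF
```

With this in place, the API server calls the webhook to translate stored v1 objects into v2 on read, which is what "v2 custom resource should be converted" asserts.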
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.150 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":167,"skipped":2753,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:54:57.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:54:58.381: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 78.768468ms) Jun 7 21:54:58.386: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 5.348135ms) Jun 7 21:54:58.391: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.734288ms) Jun 7 21:54:58.394: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.879935ms) Jun 7 21:54:58.396: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.326144ms) Jun 7 21:54:58.399: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.44185ms) Jun 7 21:54:58.401: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.667913ms) Jun 7 21:54:58.404: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.361216ms) Jun 7 21:54:58.415: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 11.239081ms) Jun 7 21:54:58.434: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 18.739162ms) Jun 7 21:54:58.438: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.690435ms) Jun 7 21:54:58.441: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.926939ms) Jun 7 21:54:58.444: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.228683ms) Jun 7 21:54:58.447: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.838357ms) Jun 7 21:54:58.450: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.781588ms) Jun 7 21:54:58.453: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.779476ms) Jun 7 21:54:58.456: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.374842ms) Jun 7 21:54:58.459: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.796065ms) Jun 7 21:54:58.461: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.958707ms) Jun 7 21:54:58.464: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.65778ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:54:58.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3273" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":168,"skipped":2758,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:54:58.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 7 21:55:02.849: INFO: &Pod{ObjectMeta:{send-events-ddab24bb-0872-4a7b-a5d7-57e5dc890f39 events-8503 /api/v1/namespaces/events-8503/pods/send-events-ddab24bb-0872-4a7b-a5d7-57e5dc890f39 394a11c7-5c68-41d2-9a0f-33785a71613d 22537886 0 2020-06-07 21:54:58 +0000 UTC map[name:foo time:788381056] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r2wc2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r2wc2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r2wc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:ni
l,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:54:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 21:54:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.62,StartTime:2020-06-07 21:54:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 21:55:01 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://b2273a9cf810f1a7aa088d4bf10f0491cbba5471c2ec27e4215ac9041efb2134,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jun 7 21:55:04.854: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 7 21:55:06.859: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:55:06.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8503" for this suite. 
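The Events test above waits until it has seen both a scheduler event and a kubelet event for its pod. The filtering that such a check performs can be sketched as follows; this is an illustrative reimplementation, not the framework's code, and the sample event dicts are hypothetical (field names mirror the core/v1 Event object's `involvedObject.name` and `source.component`):

```python
def events_for_pod(events, pod_name, component):
    """Return events that reference the named pod and were emitted by the given component."""
    return [
        e for e in events
        if e["involvedObject"]["name"] == pod_name
        and e["source"]["component"] == component
    ]

# Hypothetical sample events, shaped like core/v1 Event objects.
events = [
    {"involvedObject": {"name": "send-events-ddab24bb"},
     "source": {"component": "default-scheduler"}, "reason": "Scheduled"},
    {"involvedObject": {"name": "send-events-ddab24bb"},
     "source": {"component": "kubelet"}, "reason": "Started"},
    {"involvedObject": {"name": "other-pod"},
     "source": {"component": "kubelet"}, "reason": "Started"},
]

scheduler_events = events_for_pod(events, "send-events-ddab24bb", "default-scheduler")
kubelet_events = events_for_pod(events, "send-events-ddab24bb", "kubelet")
```

The test passes only once both lists are non-empty, which is why the log prints "Saw scheduler event" and "Saw kubelet event" a couple of seconds apart.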
• [SLOW TEST:8.167 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":169,"skipped":2794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:55:06.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0607 21:55:37.495787 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 7 21:55:37.495: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:55:37.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3601" for this suite. 
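The garbage-collector test deletes the Deployment with `deleteOptions.propagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet is *not* collected. A sketch of the DeleteOptions payload such a request carries (field names per the v1 DeleteOptions object; whether your client sends `apiVersion: v1` or the meta/v1 group form can vary by client, so treat the exact `apiVersion` string here as an assumption):

```python
import json

# DeleteOptions body for an orphaning delete: sent with the DELETE request,
# it tells the garbage collector to leave dependents (e.g. the ReplicaSet) in place.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Orphan",
}

body = json.dumps(delete_options)
```

With `"Background"` or `"Foreground"` in place of `"Orphan"`, the same request would instead cascade the delete to the ReplicaSet and its Pods.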
• [SLOW TEST:30.615 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":170,"skipped":2820,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:55:37.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7980.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7980.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 21:55:43.616: INFO: DNS probes 
using dns-test-2b020811-2b14-4a07-8d31-9bb6dc8ca5e9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7980.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7980.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 21:55:49.985: INFO: File wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 7 21:55:49.990: INFO: File jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 7 21:55:49.990: INFO: Lookups using dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a failed for: [wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local] Jun 7 21:55:54.995: INFO: File wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 7 21:55:54.998: INFO: File jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 7 21:55:54.998: INFO: Lookups using dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a failed for: [wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local] Jun 7 21:55:59.995: INFO: File wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 7 21:55:59.999: INFO: File jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 7 21:55:59.999: INFO: Lookups using dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a failed for: [wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local] Jun 7 21:56:04.995: INFO: File wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 7 21:56:04.998: INFO: File jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 7 21:56:04.998: INFO: Lookups using dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a failed for: [wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local] Jun 7 21:56:09.996: INFO: File wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 7 21:56:10.000: INFO: File jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local from pod dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 7 21:56:10.000: INFO: Lookups using dns-7980/dns-test-2d856867-2612-4580-a041-301786fd627a failed for: [wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local] Jun 7 21:56:14.997: INFO: DNS probes using dns-test-2d856867-2612-4580-a041-301786fd627a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7980.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7980.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7980.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7980.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 21:56:23.719: INFO: DNS probes using dns-test-a0b4a3d1-7282-48b8-955a-1c2762be63ea succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:56:23.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7980" for this suite. 
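The DNS probes above keep re-running `dig +short ... CNAME` until the answer flips from the stale `foo.example.com.` to the updated `bar.example.com.`; the log shows several failed rounds before propagation. The retry-until-match logic can be sketched like this, with `lookup` as a stand-in for the real in-pod `dig` query:

```python
def wait_for_cname(lookup, expected, attempts=5):
    """Poll a CNAME lookup until it returns the expected target.

    `lookup` is any zero-argument callable returning the current CNAME answer;
    in the e2e test this role is played by `dig +short ... CNAME` inside the probe pod.
    Returns the number of stale answers seen before success.
    """
    for i in range(attempts):
        answer = lookup()
        if answer.strip() == expected:
            return i
    raise TimeoutError(f"CNAME never became {expected!r}")

# Stand-in lookup: stale 'foo.example.com.' answers until the record propagates,
# mirroring the failed rounds in the log (note the trailing space dig emits).
answers = iter(["foo.example.com. ", "foo.example.com. ", "bar.example.com."])
failed_attempts = wait_for_cname(lambda: next(answers), "bar.example.com.")
```

The `.strip()` matters: as the log shows, the raw `dig` output is `'foo.example.com. '` with trailing whitespace, so an exact string compare against `'bar.example.com.'` would never match without normalization.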
• [SLOW TEST:46.303 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":171,"skipped":2825,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:56:23.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jun 7 21:56:24.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2393' Jun 7 21:56:28.255: INFO: stderr: "" Jun 7 21:56:28.255: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 7 21:56:28.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2393' Jun 7 21:56:28.352: INFO: stderr: "" Jun 7 21:56:28.352: INFO: stdout: "update-demo-nautilus-qvt5f update-demo-nautilus-rw84z " Jun 7 21:56:28.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvt5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:28.436: INFO: stderr: "" Jun 7 21:56:28.436: INFO: stdout: "" Jun 7 21:56:28.436: INFO: update-demo-nautilus-qvt5f is created but not running Jun 7 21:56:33.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2393' Jun 7 21:56:33.534: INFO: stderr: "" Jun 7 21:56:33.535: INFO: stdout: "update-demo-nautilus-qvt5f update-demo-nautilus-rw84z " Jun 7 21:56:33.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvt5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:33.628: INFO: stderr: "" Jun 7 21:56:33.628: INFO: stdout: "true" Jun 7 21:56:33.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvt5f -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:33.730: INFO: stderr: "" Jun 7 21:56:33.730: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 21:56:33.730: INFO: validating pod update-demo-nautilus-qvt5f Jun 7 21:56:33.734: INFO: got data: { "image": "nautilus.jpg" } Jun 7 21:56:33.734: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 7 21:56:33.734: INFO: update-demo-nautilus-qvt5f is verified up and running Jun 7 21:56:33.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw84z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:33.827: INFO: stderr: "" Jun 7 21:56:33.827: INFO: stdout: "true" Jun 7 21:56:33.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw84z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:33.921: INFO: stderr: "" Jun 7 21:56:33.921: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 21:56:33.921: INFO: validating pod update-demo-nautilus-rw84z Jun 7 21:56:33.943: INFO: got data: { "image": "nautilus.jpg" } Jun 7 21:56:33.943: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 7 21:56:33.943: INFO: update-demo-nautilus-rw84z is verified up and running STEP: scaling down the replication controller Jun 7 21:56:33.945: INFO: scanned /root for discovery docs: Jun 7 21:56:33.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2393' Jun 7 21:56:35.071: INFO: stderr: "" Jun 7 21:56:35.071: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 7 21:56:35.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2393' Jun 7 21:56:35.167: INFO: stderr: "" Jun 7 21:56:35.167: INFO: stdout: "update-demo-nautilus-qvt5f update-demo-nautilus-rw84z " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 7 21:56:40.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2393' Jun 7 21:56:40.266: INFO: stderr: "" Jun 7 21:56:40.266: INFO: stdout: "update-demo-nautilus-qvt5f update-demo-nautilus-rw84z " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 7 21:56:45.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2393' Jun 7 21:56:45.394: INFO: stderr: "" Jun 7 21:56:45.394: INFO: stdout: "update-demo-nautilus-qvt5f update-demo-nautilus-rw84z " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 7 21:56:50.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2393' Jun 7 21:56:50.486: INFO: stderr: "" Jun 7 
21:56:50.486: INFO: stdout: "update-demo-nautilus-rw84z " Jun 7 21:56:50.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw84z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:50.581: INFO: stderr: "" Jun 7 21:56:50.581: INFO: stdout: "true" Jun 7 21:56:50.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw84z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:50.672: INFO: stderr: "" Jun 7 21:56:50.672: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 21:56:50.672: INFO: validating pod update-demo-nautilus-rw84z Jun 7 21:56:50.675: INFO: got data: { "image": "nautilus.jpg" } Jun 7 21:56:50.675: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 7 21:56:50.675: INFO: update-demo-nautilus-rw84z is verified up and running STEP: scaling up the replication controller Jun 7 21:56:50.677: INFO: scanned /root for discovery docs: Jun 7 21:56:50.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2393' Jun 7 21:56:51.796: INFO: stderr: "" Jun 7 21:56:51.796: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 7 21:56:51.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2393' Jun 7 21:56:51.893: INFO: stderr: "" Jun 7 21:56:51.893: INFO: stdout: "update-demo-nautilus-rw84z update-demo-nautilus-tdslw " Jun 7 21:56:51.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw84z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:52.040: INFO: stderr: "" Jun 7 21:56:52.040: INFO: stdout: "true" Jun 7 21:56:52.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw84z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:52.143: INFO: stderr: "" Jun 7 21:56:52.143: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 21:56:52.143: INFO: validating pod update-demo-nautilus-rw84z Jun 7 21:56:52.147: INFO: got data: { "image": "nautilus.jpg" } Jun 7 21:56:52.147: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 7 21:56:52.147: INFO: update-demo-nautilus-rw84z is verified up and running Jun 7 21:56:52.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tdslw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:52.252: INFO: stderr: "" Jun 7 21:56:52.252: INFO: stdout: "" Jun 7 21:56:52.252: INFO: update-demo-nautilus-tdslw is created but not running Jun 7 21:56:57.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2393' Jun 7 21:56:57.356: INFO: stderr: "" Jun 7 21:56:57.356: INFO: stdout: "update-demo-nautilus-rw84z update-demo-nautilus-tdslw " Jun 7 21:56:57.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw84z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:57.444: INFO: stderr: "" Jun 7 21:56:57.444: INFO: stdout: "true" Jun 7 21:56:57.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw84z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:57.530: INFO: stderr: "" Jun 7 21:56:57.530: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 21:56:57.530: INFO: validating pod update-demo-nautilus-rw84z Jun 7 21:56:57.533: INFO: got data: { "image": "nautilus.jpg" } Jun 7 21:56:57.533: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 7 21:56:57.533: INFO: update-demo-nautilus-rw84z is verified up and running Jun 7 21:56:57.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tdslw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:57.648: INFO: stderr: "" Jun 7 21:56:57.648: INFO: stdout: "true" Jun 7 21:56:57.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tdslw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2393' Jun 7 21:56:57.733: INFO: stderr: "" Jun 7 21:56:57.733: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 7 21:56:57.733: INFO: validating pod update-demo-nautilus-tdslw Jun 7 21:56:57.738: INFO: got data: { "image": "nautilus.jpg" } Jun 7 21:56:57.738: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 7 21:56:57.738: INFO: update-demo-nautilus-tdslw is verified up and running STEP: using delete to clean up resources Jun 7 21:56:57.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2393' Jun 7 21:56:57.826: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 7 21:56:57.826: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 7 21:56:57.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2393' Jun 7 21:56:57.918: INFO: stderr: "No resources found in kubectl-2393 namespace.\n" Jun 7 21:56:57.919: INFO: stdout: "" Jun 7 21:56:57.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2393 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 7 21:56:58.050: INFO: stderr: "" Jun 7 21:56:58.050: INFO: stdout: "update-demo-nautilus-rw84z\nupdate-demo-nautilus-tdslw\n" Jun 7 21:56:58.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2393' Jun 7 21:56:58.644: INFO: stderr: "No resources found in kubectl-2393 namespace.\n" Jun 7 21:56:58.644: INFO: stdout: "" Jun 7 21:56:58.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2393 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 7 21:56:58.746: INFO: stderr: "" Jun 7 21:56:58.746: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:56:58.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2393" for this suite. 
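Each `kubectl get pods ... -o template` call in the scale test above evaluates a Go template that prints `true` only when a container named `update-demo` exists in `status.containerStatuses` and has a `state.running` entry. The same check, reimplemented over a pod's JSON form (the trimmed pod dicts below are hypothetical examples shaped like `kubectl get pod -o json` output):

```python
def container_running(pod, container_name):
    """Mirror the e2e go-template: True iff the named container reports state.running."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# Trimmed pod objects, shaped like `kubectl get pod -o json` output.
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-06-07T21:56:30Z"}}},
]}}
pending_pod = {"status": {}}  # containerStatuses not populated yet
```

This explains the `stdout: ""` lines in the log: while a pod is still Pending, `status.containerStatuses` is absent, the template's `exists` guard fails, and nothing is printed, so the test reports "created but not running" and retries.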
• [SLOW TEST:34.946 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":172,"skipped":2844,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:56:58.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 7 21:56:59.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1065' Jun 7 21:56:59.331: INFO: stderr: "kubectl run --generator=job/v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 7 21:56:59.331: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Jun 7 21:56:59.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1065' Jun 7 21:56:59.513: INFO: stderr: "" Jun 7 21:56:59.513: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:56:59.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1065" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":173,"skipped":2846,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:56:59.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 7 21:56:59.703: INFO: Waiting up 
to 5m0s for pod "pod-0a76bf15-0625-4734-b161-abf8da93c80c" in namespace "emptydir-6798" to be "success or failure" Jun 7 21:56:59.712: INFO: Pod "pod-0a76bf15-0625-4734-b161-abf8da93c80c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.248031ms Jun 7 21:57:01.810: INFO: Pod "pod-0a76bf15-0625-4734-b161-abf8da93c80c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107013036s Jun 7 21:57:03.820: INFO: Pod "pod-0a76bf15-0625-4734-b161-abf8da93c80c": Phase="Running", Reason="", readiness=true. Elapsed: 4.117268887s Jun 7 21:57:05.824: INFO: Pod "pod-0a76bf15-0625-4734-b161-abf8da93c80c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121328394s STEP: Saw pod success Jun 7 21:57:05.824: INFO: Pod "pod-0a76bf15-0625-4734-b161-abf8da93c80c" satisfied condition "success or failure" Jun 7 21:57:05.828: INFO: Trying to get logs from node jerma-worker pod pod-0a76bf15-0625-4734-b161-abf8da93c80c container test-container: STEP: delete the pod Jun 7 21:57:05.858: INFO: Waiting for pod pod-0a76bf15-0625-4734-b161-abf8da93c80c to disappear Jun 7 21:57:05.873: INFO: Pod pod-0a76bf15-0625-4734-b161-abf8da93c80c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:57:05.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6798" for this suite. 
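The pattern visible in the EmptyDir test above, polling the pod phase every couple of seconds until it reaches `Succeeded` or `Failed` (or timing out after 5m0s), can be sketched as follows. `get_phase` is a hypothetical stand-in for the framework's pod Get call, not the actual e2e framework API:

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod terminates, mirroring the
    'success or failure' wait shown in the log above."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not terminate within %ss" % timeout)

# Simulated phase sequence matching the log: Pending, Pending, Running, Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_success_or_failure(lambda: next(phases), sleep=lambda _: None)
print(result)  # Succeeded
```

Injecting `clock` and `sleep` keeps the sketch testable without real delays; the e2e framework's own wait helpers follow the same poll-until-terminal shape.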
• [SLOW TEST:6.323 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2853,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:57:05.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 21:57:06.532: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 21:57:08.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163826, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163826, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163826, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727163826, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 21:57:11.638: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 21:57:11.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:57:12.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2089" for this suite. STEP: Destroying namespace "webhook-2089-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.118 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":175,"skipped":2860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:57:12.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Jun 7 21:57:13.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8418' Jun 7 21:57:13.588: INFO: stderr: "" Jun 7 21:57:13.588: INFO: stdout: "pod/pause created\n" Jun 7 21:57:13.588: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 7 
21:57:13.588: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8418" to be "running and ready" Jun 7 21:57:13.611: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 22.844276ms Jun 7 21:57:15.614: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026469932s Jun 7 21:57:17.618: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.029555446s Jun 7 21:57:17.618: INFO: Pod "pause" satisfied condition "running and ready" Jun 7 21:57:17.618: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jun 7 21:57:17.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8418' Jun 7 21:57:17.716: INFO: stderr: "" Jun 7 21:57:17.717: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 7 21:57:17.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8418' Jun 7 21:57:17.803: INFO: stderr: "" Jun 7 21:57:17.803: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 7 21:57:17.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8418' Jun 7 21:57:17.909: INFO: stderr: "" Jun 7 21:57:17.909: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 7 21:57:17.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8418' Jun 7 
21:57:18.007: INFO: stderr: "" Jun 7 21:57:18.007: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Jun 7 21:57:18.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8418' Jun 7 21:57:18.130: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 7 21:57:18.130: INFO: stdout: "pod \"pause\" force deleted\n" Jun 7 21:57:18.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8418' Jun 7 21:57:18.458: INFO: stderr: "No resources found in kubectl-8418 namespace.\n" Jun 7 21:57:18.458: INFO: stdout: "" Jun 7 21:57:18.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8418 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 7 21:57:18.557: INFO: stderr: "" Jun 7 21:57:18.557: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:57:18.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8418" for this suite. 
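The two `kubectl label` invocations above show both halves of kubectl's label-argument convention: `testing-label=testing-label-value` sets a label, and the trailing-dash form `testing-label-` removes it. A small sketch of that convention (the parsing helper is hypothetical, not kubectl's actual code):

```python
def apply_label_arg(labels, arg):
    """Apply one kubectl-style label argument to a label dict:
    'key=value' sets the label, 'key-' removes it."""
    if arg.endswith("-") and "=" not in arg:
        labels.pop(arg[:-1], None)       # removal form, e.g. 'testing-label-'
    elif "=" in arg:
        key, value = arg.split("=", 1)
        labels[key] = value              # assignment form
    else:
        raise ValueError("unrecognized label argument: %r" % arg)
    return labels

labels = {}
apply_label_arg(labels, "testing-label=testing-label-value")
print(labels)  # {'testing-label': 'testing-label-value'}
apply_label_arg(labels, "testing-label-")
print(labels)  # {}
```

The `-L testing-label` column in the `kubectl get pod` output above is how the test verifies each state: the value appears after the set, and the column is empty after the removal.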
• [SLOW TEST:5.567 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":176,"skipped":2884,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 21:57:18.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-685k STEP: Creating a pod to test atomic-volume-subpath Jun 7 21:57:18.624: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-685k" in namespace "subpath-727" to be "success or failure" Jun 7 21:57:18.629: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.434385ms Jun 7 21:57:20.633: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009019927s Jun 7 21:57:22.637: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 4.012913301s Jun 7 21:57:24.644: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 6.019568369s Jun 7 21:57:26.648: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 8.024185232s Jun 7 21:57:28.652: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 10.027766425s Jun 7 21:57:30.657: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 12.032930797s Jun 7 21:57:32.662: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 14.037925099s Jun 7 21:57:34.667: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 16.042601165s Jun 7 21:57:36.671: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 18.047260304s Jun 7 21:57:38.676: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 20.051876964s Jun 7 21:57:40.681: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Running", Reason="", readiness=true. Elapsed: 22.056968673s Jun 7 21:57:42.685: INFO: Pod "pod-subpath-test-configmap-685k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.061266824s STEP: Saw pod success Jun 7 21:57:42.685: INFO: Pod "pod-subpath-test-configmap-685k" satisfied condition "success or failure" Jun 7 21:57:42.689: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-685k container test-container-subpath-configmap-685k: STEP: delete the pod Jun 7 21:57:42.742: INFO: Waiting for pod pod-subpath-test-configmap-685k to disappear Jun 7 21:57:42.749: INFO: Pod pod-subpath-test-configmap-685k no longer exists STEP: Deleting pod pod-subpath-test-configmap-685k Jun 7 21:57:42.749: INFO: Deleting pod "pod-subpath-test-configmap-685k" in namespace "subpath-727" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 21:57:42.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-727" for this suite. • [SLOW TEST:24.193 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":177,"skipped":2918,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes 
client Jun 7 21:57:42.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-621c5aa8-996f-4f75-a2a9-dfa9b08cb18d in namespace container-probe-346 Jun 7 21:57:46.864: INFO: Started pod busybox-621c5aa8-996f-4f75-a2a9-dfa9b08cb18d in namespace container-probe-346 STEP: checking the pod's current state and verifying that restartCount is present Jun 7 21:57:46.867: INFO: Initial restart count of pod busybox-621c5aa8-996f-4f75-a2a9-dfa9b08cb18d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:01:47.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-346" for this suite. 
• [SLOW TEST:244.801 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:01:47.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-8118 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8118 to expose endpoints map[] Jun 7 22:01:47.677: INFO: Get endpoints failed (20.817689ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 7 22:01:48.682: INFO: successfully validated that service multi-endpoint-test in namespace services-8118 exposes endpoints map[] (1.025573032s elapsed) STEP: Creating pod pod1 in namespace services-8118 
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8118 to expose endpoints map[pod1:[100]] Jun 7 22:01:52.725: INFO: successfully validated that service multi-endpoint-test in namespace services-8118 exposes endpoints map[pod1:[100]] (4.03478356s elapsed) STEP: Creating pod pod2 in namespace services-8118 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8118 to expose endpoints map[pod1:[100] pod2:[101]] Jun 7 22:01:55.837: INFO: successfully validated that service multi-endpoint-test in namespace services-8118 exposes endpoints map[pod1:[100] pod2:[101]] (3.108718129s elapsed) STEP: Deleting pod pod1 in namespace services-8118 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8118 to expose endpoints map[pod2:[101]] Jun 7 22:01:57.021: INFO: successfully validated that service multi-endpoint-test in namespace services-8118 exposes endpoints map[pod2:[101]] (1.179035149s elapsed) STEP: Deleting pod pod2 in namespace services-8118 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8118 to expose endpoints map[] Jun 7 22:01:57.472: INFO: successfully validated that service multi-endpoint-test in namespace services-8118 exposes endpoints map[] (62.877657ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:01:57.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8118" for this suite. 
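The Services test above repeatedly compares the service's observed endpoints against an expected map such as `map[pod1:[100] pod2:[101]]` until they match. A sketch of that comparison, with `get_endpoints` as a hypothetical stand-in for listing the service's Endpoints object:

```python
def endpoints_match(get_endpoints, expected):
    """Compare the observed pod->ports map to the expected one, ignoring
    port order, like the 'expose endpoints map[...]' validation above."""
    observed = get_endpoints()
    if observed.keys() != expected.keys():
        return False
    return all(sorted(observed[pod]) == sorted(expected[pod]) for pod in expected)

# Hypothetical snapshot resembling the multi-endpoint-test service in this log.
snapshot = {"pod1": [100], "pod2": [101]}
print(endpoints_match(lambda: snapshot, {"pod1": [100], "pod2": [101]}))  # True
print(endpoints_match(lambda: snapshot, {"pod2": [101]}))  # False
```

In the real test this check runs inside a poll loop (up to 3m0s), which is why each "successfully validated" line in the log reports the elapsed time before the maps converged.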
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.323 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":179,"skipped":2959,"failed":0} SSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:01:57.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 7 22:02:02.701: INFO: Successfully updated pod "adopt-release-5tdt8" STEP: Checking that the Job readopts the Pod Jun 7 22:02:02.702: INFO: Waiting up to 15m0s for pod "adopt-release-5tdt8" in namespace "job-120" to be "adopted" Jun 7 22:02:02.735: INFO: Pod "adopt-release-5tdt8": Phase="Running", Reason="", readiness=true. Elapsed: 32.887576ms Jun 7 22:02:04.739: INFO: Pod "adopt-release-5tdt8": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.037146302s Jun 7 22:02:04.739: INFO: Pod "adopt-release-5tdt8" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 7 22:02:05.248: INFO: Successfully updated pod "adopt-release-5tdt8" STEP: Checking that the Job releases the Pod Jun 7 22:02:05.249: INFO: Waiting up to 15m0s for pod "adopt-release-5tdt8" in namespace "job-120" to be "released" Jun 7 22:02:05.295: INFO: Pod "adopt-release-5tdt8": Phase="Running", Reason="", readiness=true. Elapsed: 46.250342ms Jun 7 22:02:07.299: INFO: Pod "adopt-release-5tdt8": Phase="Running", Reason="", readiness=true. Elapsed: 2.050569922s Jun 7 22:02:07.299: INFO: Pod "adopt-release-5tdt8" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:02:07.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-120" for this suite. • [SLOW TEST:9.422 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":180,"skipped":2964,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:02:07.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:02:07.511: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:02:11.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5755" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2984,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:02:11.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 22:02:11.961: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-62dc05a4-ef6a-492f-8d07-019b1909ed74" in namespace "projected-2864" to be "success or failure"
Jun 7 22:02:11.965: INFO: Pod "downwardapi-volume-62dc05a4-ef6a-492f-8d07-019b1909ed74": Phase="Pending", Reason="", readiness=false. Elapsed: 3.33106ms
Jun 7 22:02:14.115: INFO: Pod "downwardapi-volume-62dc05a4-ef6a-492f-8d07-019b1909ed74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154210846s
Jun 7 22:02:16.121: INFO: Pod "downwardapi-volume-62dc05a4-ef6a-492f-8d07-019b1909ed74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159632533s
STEP: Saw pod success
Jun 7 22:02:16.121: INFO: Pod "downwardapi-volume-62dc05a4-ef6a-492f-8d07-019b1909ed74" satisfied condition "success or failure"
Jun 7 22:02:16.123: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-62dc05a4-ef6a-492f-8d07-019b1909ed74 container client-container:
STEP: delete the pod
Jun 7 22:02:16.174: INFO: Waiting for pod downwardapi-volume-62dc05a4-ef6a-492f-8d07-019b1909ed74 to disappear
Jun 7 22:02:16.198: INFO: Pod downwardapi-volume-62dc05a4-ef6a-492f-8d07-019b1909ed74 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:02:16.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2864" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2998,"failed":0}
SS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:02:16.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-727b7e00-78f3-4197-8645-67c97a84f335 in namespace container-probe-5196
Jun 7 22:02:20.333: INFO: Started pod busybox-727b7e00-78f3-4197-8645-67c97a84f335 in namespace container-probe-5196
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 22:02:20.341: INFO: Initial restart count of pod busybox-727b7e00-78f3-4197-8645-67c97a84f335 is 0
Jun 7 22:03:10.611: INFO: Restart count of pod container-probe-5196/busybox-727b7e00-78f3-4197-8645-67c97a84f335 is now 1 (50.270054946s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:03:10.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5196" for this suite.
• [SLOW TEST:54.451 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3000,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:03:10.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 22:03:10.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Jun 7 22:03:11.321: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-07T22:03:11Z generation:1 name:name1 resourceVersion:22540005 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1cac47ae-927c-4daf-9519-0a9b0b28ac3f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jun 7 22:03:21.327: INFO: Got : ADDED
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-07T22:03:21Z generation:1 name:name2 resourceVersion:22540048 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c9edf659-f557-4cf7-be12-9790ae75ee04] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jun 7 22:03:31.333: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-07T22:03:11Z generation:2 name:name1 resourceVersion:22540078 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1cac47ae-927c-4daf-9519-0a9b0b28ac3f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jun 7 22:03:41.340: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-07T22:03:21Z generation:2 name:name2 resourceVersion:22540108 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c9edf659-f557-4cf7-be12-9790ae75ee04] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jun 7 22:03:51.349: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-07T22:03:11Z generation:2 name:name1 resourceVersion:22540138 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1cac47ae-927c-4daf-9519-0a9b0b28ac3f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jun 7 22:04:01.363: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-07T22:03:21Z generation:2 name:name2 resourceVersion:22540169 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c9edf659-f557-4cf7-be12-9790ae75ee04] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:04:11.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8668" for this suite.
• [SLOW TEST:61.227 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":184,"skipped":3007,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:04:11.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:04:22.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7354" for this suite.
• [SLOW TEST:11.100 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":185,"skipped":3011,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:04:22.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jun 7 22:04:23.658: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jun 7 22:04:25.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164263, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164263, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164263, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164263, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 7 22:04:27.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164263, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164263, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164263, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164263, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 7 22:04:30.703: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 22:04:30.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:04:31.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8412" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:9.002 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":186,"skipped":3024,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:04:31.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 7 22:04:40.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 7 22:04:40.134: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 7 22:04:42.134: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 7 22:04:42.138: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 7 22:04:44.134: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 7 22:04:44.139: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 7 22:04:46.134: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 7 22:04:46.138: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 7 22:04:48.134: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 7 22:04:48.139: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 7 22:04:50.134: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 7 22:04:50.153: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:04:50.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3138" for this suite.
• [SLOW TEST:18.201 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3039,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:04:50.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-85dcc4b0-8939-415c-ae4c-f06a93f935f0
STEP: Creating a pod to test consume secrets
Jun 7 22:04:50.253: INFO: Waiting up to 5m0s for pod "pod-secrets-997d5faa-1592-48f8-b4b3-11e9e5842154" in namespace "secrets-684" to be "success or failure"
Jun 7 22:04:50.297: INFO: Pod "pod-secrets-997d5faa-1592-48f8-b4b3-11e9e5842154": Phase="Pending", Reason="", readiness=false. Elapsed: 44.022698ms
Jun 7 22:04:52.301: INFO: Pod "pod-secrets-997d5faa-1592-48f8-b4b3-11e9e5842154": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048151972s
Jun 7 22:04:54.306: INFO: Pod "pod-secrets-997d5faa-1592-48f8-b4b3-11e9e5842154": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052935824s
STEP: Saw pod success
Jun 7 22:04:54.306: INFO: Pod "pod-secrets-997d5faa-1592-48f8-b4b3-11e9e5842154" satisfied condition "success or failure"
Jun 7 22:04:54.310: INFO: Trying to get logs from node jerma-worker pod pod-secrets-997d5faa-1592-48f8-b4b3-11e9e5842154 container secret-volume-test:
STEP: delete the pod
Jun 7 22:04:54.358: INFO: Waiting for pod pod-secrets-997d5faa-1592-48f8-b4b3-11e9e5842154 to disappear
Jun 7 22:04:54.386: INFO: Pod pod-secrets-997d5faa-1592-48f8-b4b3-11e9e5842154 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:04:54.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-684" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3050,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:04:54.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 7 22:04:54.515: INFO: Waiting up to 5m0s for pod "pod-d20388ca-c0b7-44f3-83bd-f11f0a63f7e0" in namespace "emptydir-5906" to be "success or failure"
Jun 7 22:04:54.520: INFO: Pod "pod-d20388ca-c0b7-44f3-83bd-f11f0a63f7e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272105ms
Jun 7 22:04:56.523: INFO: Pod "pod-d20388ca-c0b7-44f3-83bd-f11f0a63f7e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007729571s
Jun 7 22:04:58.527: INFO: Pod "pod-d20388ca-c0b7-44f3-83bd-f11f0a63f7e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011654813s
STEP: Saw pod success
Jun 7 22:04:58.527: INFO: Pod "pod-d20388ca-c0b7-44f3-83bd-f11f0a63f7e0" satisfied condition "success or failure"
Jun 7 22:04:58.530: INFO: Trying to get logs from node jerma-worker pod pod-d20388ca-c0b7-44f3-83bd-f11f0a63f7e0 container test-container:
STEP: delete the pod
Jun 7 22:04:58.588: INFO: Waiting for pod pod-d20388ca-c0b7-44f3-83bd-f11f0a63f7e0 to disappear
Jun 7 22:04:58.601: INFO: Pod pod-d20388ca-c0b7-44f3-83bd-f11f0a63f7e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:04:58.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5906" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3060,"failed":0}
S
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:04:58.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jun 7 22:04:58.694: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3628" to be "success or failure"
Jun 7 22:04:58.698: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.865058ms
Jun 7 22:05:00.702: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00786737s
Jun 7 22:05:02.706: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012385115s
Jun 7 22:05:04.711: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017197327s
STEP: Saw pod success
Jun 7 22:05:04.711: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jun 7 22:05:04.714: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jun 7 22:05:04.940: INFO: Waiting for pod pod-host-path-test to disappear
Jun 7 22:05:05.030: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:05:05.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3628" for this suite.
• [SLOW TEST:6.443 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3061,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:05:05.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0607 22:05:15.161015 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 7 22:05:15.161: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:05:15.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9900" for this suite.
• [SLOW TEST:10.098 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":191,"skipped":3085,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:05:15.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 22:05:15.268: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jun 7 22:05:15.274: INFO: Number of nodes with available pods: 0
Jun 7 22:05:15.275: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jun 7 22:05:15.358: INFO: Number of nodes with available pods: 0
Jun 7 22:05:15.358: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:16.418: INFO: Number of nodes with available pods: 0
Jun 7 22:05:16.418: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:17.429: INFO: Number of nodes with available pods: 0
Jun 7 22:05:17.429: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:18.381: INFO: Number of nodes with available pods: 1
Jun 7 22:05:18.381: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 7 22:05:18.418: INFO: Number of nodes with available pods: 1
Jun 7 22:05:18.418: INFO: Number of running nodes: 0, number of available pods: 1
Jun 7 22:05:19.422: INFO: Number of nodes with available pods: 0
Jun 7 22:05:19.422: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 7 22:05:19.428: INFO: Number of nodes with available pods: 0
Jun 7 22:05:19.428: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:20.681: INFO: Number of nodes with available pods: 0
Jun 7 22:05:20.682: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:21.433: INFO: Number of nodes with available pods: 0
Jun 7 22:05:21.433: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:22.432: INFO: Number of nodes with available pods: 0
Jun 7 22:05:22.432: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:23.434: INFO: Number of nodes with available pods: 0
Jun 7 22:05:23.434: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:24.432: INFO: Number of nodes with available pods: 0
Jun 7 22:05:24.433: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:25.433: INFO: Number of nodes with available pods: 0
Jun 7 22:05:25.433: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:26.432: INFO: Number of nodes with available pods: 0
Jun 7 22:05:26.432: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:27.432: INFO: Number of nodes with available pods: 0
Jun 7 22:05:27.432: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:28.433: INFO: Number of nodes with available pods: 0
Jun 7 22:05:28.433: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:29.432: INFO: Number of nodes with available pods: 0
Jun 7 22:05:29.433: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:30.433: INFO: Number of nodes with available pods: 0
Jun 7 22:05:30.433: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:31.441: INFO: Number of nodes with available pods: 0
Jun 7 22:05:31.441: INFO: Node jerma-worker is running more than one daemon pod
Jun 7 22:05:32.513: INFO: Number of nodes with available pods: 1
Jun 7 22:05:32.513: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5947, will wait for the garbage collector to delete the pods
Jun 7 22:05:32.579: INFO: Deleting DaemonSet.extensions daemon-set took: 6.506899ms
Jun 7 22:05:32.880: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.269977ms
Jun 7 22:05:37.283: INFO: Number of nodes with available pods: 0
Jun 7 22:05:37.283: INFO: Number of running nodes: 0, number of available pods: 0
Jun 7 22:05:37.286: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5947/daemonsets","resourceVersion":"22540762"},"items":null}
Jun 7 22:05:37.288: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5947/pods","resourceVersion":"22540762"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:05:37.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5947" for this suite.
• [SLOW TEST:22.154 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":192,"skipped":3098,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:05:37.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jun 7 22:05:37.390: INFO: PodSpec: initContainers in
spec.initContainers Jun 7 22:06:26.023: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0d4e2570-5566-484d-9c1c-0ca83a075d78", GenerateName:"", Namespace:"init-container-2434", SelfLink:"/api/v1/namespaces/init-container-2434/pods/pod-init-0d4e2570-5566-484d-9c1c-0ca83a075d78", UID:"6dcc8a07-4f7b-4338-899e-60f63a8983b3", ResourceVersion:"22540952", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727164337, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"390588252"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nz2bq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002d88100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nz2bq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nz2bq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nz2bq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0036a6248), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00301c0c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0036a62d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0036a62f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0036a62f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0036a62fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164337, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164337, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164337, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164337, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.227", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.227"}}, StartTime:(*v1.Time)(0xc0056641e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc005664220), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002042150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b4944bba9b4e6e147f1236680a8019d1fe07b3437c8974d45531a8f249159ad4", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005664240), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005664200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0036a637f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] 
[k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:06:26.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2434" for this suite. • [SLOW TEST:48.770 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":193,"skipped":3100,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:06:26.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-v6vw STEP: Creating a pod to test atomic-volume-subpath Jun 7 22:06:26.184: INFO: 
Waiting up to 5m0s for pod "pod-subpath-test-configmap-v6vw" in namespace "subpath-9707" to be "success or failure" Jun 7 22:06:26.233: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Pending", Reason="", readiness=false. Elapsed: 49.054049ms Jun 7 22:06:28.257: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073022588s Jun 7 22:06:30.262: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 4.078006873s Jun 7 22:06:32.267: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 6.082656389s Jun 7 22:06:34.271: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 8.08681672s Jun 7 22:06:36.275: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 10.091382093s Jun 7 22:06:38.279: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 12.095345188s Jun 7 22:06:40.284: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 14.099957387s Jun 7 22:06:42.288: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 16.10393138s Jun 7 22:06:44.292: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 18.108215046s Jun 7 22:06:46.296: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 20.112417662s Jun 7 22:06:48.301: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Running", Reason="", readiness=true. Elapsed: 22.117110753s Jun 7 22:06:50.401: INFO: Pod "pod-subpath-test-configmap-v6vw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.216579115s STEP: Saw pod success Jun 7 22:06:50.401: INFO: Pod "pod-subpath-test-configmap-v6vw" satisfied condition "success or failure" Jun 7 22:06:50.404: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-v6vw container test-container-subpath-configmap-v6vw: STEP: delete the pod Jun 7 22:06:50.493: INFO: Waiting for pod pod-subpath-test-configmap-v6vw to disappear Jun 7 22:06:50.542: INFO: Pod pod-subpath-test-configmap-v6vw no longer exists STEP: Deleting pod pod-subpath-test-configmap-v6vw Jun 7 22:06:50.542: INFO: Deleting pod "pod-subpath-test-configmap-v6vw" in namespace "subpath-9707" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:06:50.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9707" for this suite. • [SLOW TEST:24.470 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":194,"skipped":3104,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:06:50.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 7 22:06:50.614: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Jun 7 22:06:51.179: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 7 22:06:53.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164411, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164411, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164411, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164411, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:06:55.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164411, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164411, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164411, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164411, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:06:58.006: INFO: Waited 525.019616ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:06:58.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1121" for this suite. 
• [SLOW TEST:7.977 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":195,"skipped":3117,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:06:58.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 22:06:59.819: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 22:07:01.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 22:07:04.903: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:04.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-3374" for this suite. STEP: Destroying namespace "webhook-3374-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.471 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":196,"skipped":3139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:05.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 7 22:07:05.110: INFO: Waiting up to 5m0s for pod "pod-e2a107cf-0e22-49b3-bb72-b7529e2afd9d" in namespace "emptydir-1571" to be 
"success or failure" Jun 7 22:07:05.113: INFO: Pod "pod-e2a107cf-0e22-49b3-bb72-b7529e2afd9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.60311ms Jun 7 22:07:07.117: INFO: Pod "pod-e2a107cf-0e22-49b3-bb72-b7529e2afd9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006684778s Jun 7 22:07:09.122: INFO: Pod "pod-e2a107cf-0e22-49b3-bb72-b7529e2afd9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011121391s STEP: Saw pod success Jun 7 22:07:09.122: INFO: Pod "pod-e2a107cf-0e22-49b3-bb72-b7529e2afd9d" satisfied condition "success or failure" Jun 7 22:07:09.125: INFO: Trying to get logs from node jerma-worker pod pod-e2a107cf-0e22-49b3-bb72-b7529e2afd9d container test-container: STEP: delete the pod Jun 7 22:07:09.157: INFO: Waiting for pod pod-e2a107cf-0e22-49b3-bb72-b7529e2afd9d to disappear Jun 7 22:07:09.161: INFO: Pod pod-e2a107cf-0e22-49b3-bb72-b7529e2afd9d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:09.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1571" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3277,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:09.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 7 22:07:09.211: INFO: >>> kubeConfig: /root/.kube/config Jun 7 22:07:12.126: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:22.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3461" for this suite. 
• [SLOW TEST:13.424 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":198,"skipped":3284,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:22.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:26.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3204" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3294,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:26.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 22:07:26.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc068034-313c-458b-ac69-f97bf606267b" in namespace "downward-api-8894" to be "success or failure" Jun 7 22:07:26.758: INFO: Pod "downwardapi-volume-cc068034-313c-458b-ac69-f97bf606267b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.118703ms Jun 7 22:07:28.762: INFO: Pod "downwardapi-volume-cc068034-313c-458b-ac69-f97bf606267b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007759657s Jun 7 22:07:30.767: INFO: Pod "downwardapi-volume-cc068034-313c-458b-ac69-f97bf606267b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01236919s STEP: Saw pod success Jun 7 22:07:30.767: INFO: Pod "downwardapi-volume-cc068034-313c-458b-ac69-f97bf606267b" satisfied condition "success or failure" Jun 7 22:07:30.770: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cc068034-313c-458b-ac69-f97bf606267b container client-container: STEP: delete the pod Jun 7 22:07:30.786: INFO: Waiting for pod downwardapi-volume-cc068034-313c-458b-ac69-f97bf606267b to disappear Jun 7 22:07:30.791: INFO: Pod downwardapi-volume-cc068034-313c-458b-ac69-f97bf606267b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:30.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8894" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3300,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:30.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: validating cluster-info Jun 7 22:07:30.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 7 22:07:33.878: INFO: stderr: "" Jun 7 22:07:33.878: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:33.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4973" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":201,"skipped":3321,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:33.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Jun 7 22:07:33.934: INFO: Asynchronously running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix509353540/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:33.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4250" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":202,"skipped":3329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:34.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:45.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9068" for this suite. • [SLOW TEST:11.111 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":203,"skipped":3353,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:45.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 7 22:07:45.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5665' Jun 7 22:07:45.328: INFO: stderr: "" Jun 7 22:07:45.328: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 7 22:07:50.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5665 -o json' Jun 7 22:07:50.493: INFO: stderr: "" Jun 7 22:07:50.493: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-07T22:07:45Z\",\n \"labels\": {\n \"run\": 
\"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5665\",\n \"resourceVersion\": \"22541506\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5665/pods/e2e-test-httpd-pod\",\n \"uid\": \"6df486de-b2cb-4781-a9ab-5e7a576f9d66\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jg8sv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jg8sv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-jg8sv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-07T22:07:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-07T22:07:48Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-07T22:07:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-07T22:07:45Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d184cc29d68e83cc3b12cd95c40ef965cffb5f03aaf7202ef1f81fb67d3b0d2a\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-07T22:07:47Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.83\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.83\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-07T22:07:45Z\"\n }\n}\n" STEP: replace the image in the pod Jun 7 22:07:50.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5665' Jun 7 22:07:50.815: INFO: stderr: "" Jun 7 22:07:50.815: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Jun 7 22:07:50.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5665' Jun 7 22:07:59.278: INFO: stderr: "" Jun 7 22:07:59.278: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:07:59.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5665" for this suite. 
• [SLOW TEST:14.157 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":204,"skipped":3365,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:07:59.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:08:03.443: INFO: Waiting up to 5m0s for pod "client-envvars-d9638190-6f51-4fdb-8c30-68a9208938a6" in namespace "pods-2587" to be "success or failure" Jun 7 22:08:03.448: INFO: Pod "client-envvars-d9638190-6f51-4fdb-8c30-68a9208938a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.985444ms Jun 7 22:08:05.451: INFO: Pod "client-envvars-d9638190-6f51-4fdb-8c30-68a9208938a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008300011s Jun 7 22:08:07.455: INFO: Pod "client-envvars-d9638190-6f51-4fdb-8c30-68a9208938a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011491714s STEP: Saw pod success Jun 7 22:08:07.455: INFO: Pod "client-envvars-d9638190-6f51-4fdb-8c30-68a9208938a6" satisfied condition "success or failure" Jun 7 22:08:07.457: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-d9638190-6f51-4fdb-8c30-68a9208938a6 container env3cont: STEP: delete the pod Jun 7 22:08:07.507: INFO: Waiting for pod client-envvars-d9638190-6f51-4fdb-8c30-68a9208938a6 to disappear Jun 7 22:08:07.520: INFO: Pod client-envvars-d9638190-6f51-4fdb-8c30-68a9208938a6 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:08:07.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2587" for this suite. • [SLOW TEST:8.241 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3377,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:08:07.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-3415 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3415 to expose endpoints map[] Jun 7 22:08:07.736: INFO: Get endpoints failed (58.570494ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 7 22:08:08.739: INFO: successfully validated that service endpoint-test2 in namespace services-3415 exposes endpoints map[] (1.061815717s elapsed) STEP: Creating pod pod1 in namespace services-3415 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3415 to expose endpoints map[pod1:[80]] Jun 7 22:08:11.852: INFO: successfully validated that service endpoint-test2 in namespace services-3415 exposes endpoints map[pod1:[80]] (3.106468188s elapsed) STEP: Creating pod pod2 in namespace services-3415 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3415 to expose endpoints map[pod1:[80] pod2:[80]] Jun 7 22:08:15.970: INFO: successfully validated that service endpoint-test2 in namespace services-3415 exposes endpoints map[pod1:[80] pod2:[80]] (4.114174487s elapsed) STEP: Deleting pod pod1 in namespace services-3415 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3415 to expose endpoints map[pod2:[80]] Jun 7 22:08:16.016: INFO: successfully validated that service endpoint-test2 in namespace services-3415 exposes endpoints map[pod2:[80]] (42.654567ms elapsed) STEP: Deleting pod pod2 in namespace services-3415 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3415 to expose endpoints map[] Jun 7 22:08:17.029: INFO: successfully validated that service 
endpoint-test2 in namespace services-3415 exposes endpoints map[] (1.008899553s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:08:17.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3415" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.537 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":206,"skipped":3387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:08:17.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 7 22:08:17.122: INFO: 
PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:08:25.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4959" for this suite. • [SLOW TEST:8.574 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":207,"skipped":3417,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:08:25.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 7 22:08:30.272: INFO: Successfully updated pod "labelsupdated7d7dc07-1830-4fec-908a-b0b0b729347c" [AfterEach] [sig-storage] Projected 
downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:08:32.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9548" for this suite. • [SLOW TEST:6.661 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3425,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:08:32.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:08:32.415: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f297c706-4467-4213-90c8-8bf0334b6e40" in namespace "security-context-test-478" to be "success or failure" Jun 7 
22:08:32.426: INFO: Pod "busybox-privileged-false-f297c706-4467-4213-90c8-8bf0334b6e40": Phase="Pending", Reason="", readiness=false. Elapsed: 11.680063ms Jun 7 22:08:34.430: INFO: Pod "busybox-privileged-false-f297c706-4467-4213-90c8-8bf0334b6e40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01574618s Jun 7 22:08:36.434: INFO: Pod "busybox-privileged-false-f297c706-4467-4213-90c8-8bf0334b6e40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019472505s Jun 7 22:08:36.434: INFO: Pod "busybox-privileged-false-f297c706-4467-4213-90c8-8bf0334b6e40" satisfied condition "success or failure" Jun 7 22:08:36.439: INFO: Got logs for pod "busybox-privileged-false-f297c706-4467-4213-90c8-8bf0334b6e40": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:08:36.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-478" for this suite. 
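The `busybox-privileged-false` pod above exercises `securityContext.privileged: false`; the logged container output `ip: RTNETLINK answers: Operation not permitted` shows the unprivileged container being denied a network-configuration syscall. A minimal sketch of such a pod (the exact `ip` arguments are an assumption here; the conformance test runs a similar invocation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # Adding a network link requires CAP_NET_ADMIN, which an unprivileged
    # container lacks, so this command fails with the RTNETLINK error seen
    # in the log. (Assumed command, for illustration.)
    command: ["ip", "link", "add", "dummy0", "type", "dummy"]
    securityContext:
      privileged: false
```

The test passes precisely because the pod reaches `Succeeded` while its logs show the operation was refused inside the container.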
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3432,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:08:36.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Jun 7 22:08:36.540: INFO: Waiting up to 5m0s for pod "var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3" in namespace "var-expansion-3612" to be "success or failure" Jun 7 22:08:36.546: INFO: Pod "var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.995166ms Jun 7 22:08:38.550: INFO: Pod "var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009425951s Jun 7 22:08:40.554: INFO: Pod "var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013091288s Jun 7 22:08:42.556: INFO: Pod "var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015649634s STEP: Saw pod success Jun 7 22:08:42.556: INFO: Pod "var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3" satisfied condition "success or failure" Jun 7 22:08:42.558: INFO: Trying to get logs from node jerma-worker pod var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3 container dapi-container: STEP: delete the pod Jun 7 22:08:42.579: INFO: Waiting for pod var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3 to disappear Jun 7 22:08:42.589: INFO: Pod var-expansion-65ec3a94-168d-4af5-a2bc-4829f65507a3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:08:42.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3612" for this suite. • [SLOW TEST:6.149 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 
22:08:42.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Jun 7 22:08:42.682: INFO: Waiting up to 5m0s for pod "client-containers-c12a8441-49e4-4451-80c3-d216092f8951" in namespace "containers-2451" to be "success or failure" Jun 7 22:08:42.715: INFO: Pod "client-containers-c12a8441-49e4-4451-80c3-d216092f8951": Phase="Pending", Reason="", readiness=false. Elapsed: 33.531148ms Jun 7 22:08:44.720: INFO: Pod "client-containers-c12a8441-49e4-4451-80c3-d216092f8951": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037998246s Jun 7 22:08:46.723: INFO: Pod "client-containers-c12a8441-49e4-4451-80c3-d216092f8951": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041786687s STEP: Saw pod success Jun 7 22:08:46.723: INFO: Pod "client-containers-c12a8441-49e4-4451-80c3-d216092f8951" satisfied condition "success or failure" Jun 7 22:08:46.726: INFO: Trying to get logs from node jerma-worker2 pod client-containers-c12a8441-49e4-4451-80c3-d216092f8951 container test-container: STEP: delete the pod Jun 7 22:08:46.792: INFO: Waiting for pod client-containers-c12a8441-49e4-4451-80c3-d216092f8951 to disappear Jun 7 22:08:46.799: INFO: Pod client-containers-c12a8441-49e4-4451-80c3-d216092f8951 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:08:46.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2451" for this suite. 
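The "override all" Docker Containers test above sets both `command` and `args` on the container, which replace the image's ENTRYPOINT and CMD respectively. A hedged sketch of that shape (image and echoed values are illustrative, not the test's exact inputs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["/bin/echo"]                  # overrides the image ENTRYPOINT
    args: ["override", "arguments"]         # overrides the image CMD
```

Setting `command` alone discards the image's CMD as well; setting only `args` keeps the ENTRYPOINT and replaces just the CMD, which is the distinction this conformance test verifies.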
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:08:46.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 7 22:08:46.936: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 7 22:08:46.946: INFO: Waiting for terminating namespaces to be deleted... 
Jun 7 22:08:46.948: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jun 7 22:08:46.953: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 22:08:46.953: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 22:08:46.953: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 22:08:46.953: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 22:08:46.953: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jun 7 22:08:46.958: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 22:08:46.958: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 22:08:46.958: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 7 22:08:46.958: INFO: Container kube-hunter ready: false, restart count 0 Jun 7 22:08:46.958: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 7 22:08:46.958: INFO: Container kube-bench ready: false, restart count 0 Jun 7 22:08:46.958: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 22:08:46.958: INFO: Container kindnet-cni ready: true, restart count 2 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-956008fc-dc61-4675-aa64-d4af5383fe75 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-956008fc-dc61-4675-aa64-d4af5383fe75 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-956008fc-dc61-4675-aa64-d4af5383fe75 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:09:05.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7895" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.361 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":212,"skipped":3652,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] 
[k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:09:05.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 7 22:09:05.263: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:09:19.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1265" for this suite. 
• [SLOW TEST:14.281 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3657,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:09:19.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:09:19.655: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"853cee4f-ea2b-45e9-a191-750d9f2937a7", Controller:(*bool)(0xc0028adfc2), BlockOwnerDeletion:(*bool)(0xc0028adfc3)}} Jun 7 22:09:19.666: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c93963d2-cacc-474d-972c-0ee073940444", Controller:(*bool)(0xc0029082c2), BlockOwnerDeletion:(*bool)(0xc0029082c3)}} Jun 7 22:09:19.679: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b5f772e9-ed50-47f7-9326-1a3ae5eeba11", Controller:(*bool)(0xc00290858a), 
BlockOwnerDeletion:(*bool)(0xc00290858b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:09:24.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4491" for this suite. • [SLOW TEST:5.257 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":214,"skipped":3663,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:09:24.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 7 22:09:29.401: INFO: Successfully updated pod "labelsupdate61e9835a-76f3-483d-9a58-4db53819c00e" [AfterEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:09:31.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6978" for this suite. • [SLOW TEST:6.679 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3671,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:09:31.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-33616ade-b765-4490-9bbf-2d04f1c8245f STEP: Creating a pod to test consume configMaps Jun 7 22:09:31.486: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d22bfcc4-88ba-4df6-9b06-6b3ce49b2e36" in namespace "projected-1228" to be "success or failure" Jun 7 
22:09:31.489: INFO: Pod "pod-projected-configmaps-d22bfcc4-88ba-4df6-9b06-6b3ce49b2e36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.646578ms Jun 7 22:09:33.494: INFO: Pod "pod-projected-configmaps-d22bfcc4-88ba-4df6-9b06-6b3ce49b2e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007887795s Jun 7 22:09:35.498: INFO: Pod "pod-projected-configmaps-d22bfcc4-88ba-4df6-9b06-6b3ce49b2e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012400198s STEP: Saw pod success Jun 7 22:09:35.498: INFO: Pod "pod-projected-configmaps-d22bfcc4-88ba-4df6-9b06-6b3ce49b2e36" satisfied condition "success or failure" Jun 7 22:09:35.502: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-d22bfcc4-88ba-4df6-9b06-6b3ce49b2e36 container projected-configmap-volume-test: STEP: delete the pod Jun 7 22:09:35.522: INFO: Waiting for pod pod-projected-configmaps-d22bfcc4-88ba-4df6-9b06-6b3ce49b2e36 to disappear Jun 7 22:09:35.536: INFO: Pod pod-projected-configmaps-d22bfcc4-88ba-4df6-9b06-6b3ce49b2e36 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:09:35.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1228" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3678,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:09:35.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jun 7 22:09:36.135: INFO: created pod pod-service-account-defaultsa Jun 7 22:09:36.135: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 7 22:09:36.140: INFO: created pod pod-service-account-mountsa Jun 7 22:09:36.140: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 7 22:09:36.146: INFO: created pod pod-service-account-nomountsa Jun 7 22:09:36.146: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 7 22:09:36.206: INFO: created pod pod-service-account-defaultsa-mountspec Jun 7 22:09:36.206: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 7 22:09:36.216: INFO: created pod pod-service-account-mountsa-mountspec Jun 7 22:09:36.217: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 7 22:09:36.243: INFO: created pod pod-service-account-nomountsa-mountspec Jun 7 
22:09:36.243: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 7 22:09:36.355: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 7 22:09:36.355: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 7 22:09:36.360: INFO: created pod pod-service-account-mountsa-nomountspec Jun 7 22:09:36.360: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 7 22:09:36.369: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 7 22:09:36.369: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:09:36.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7288" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":217,"skipped":3701,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:09:36.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2693 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2693 STEP: creating replication controller externalsvc in namespace services-2693 I0607 22:09:36.717508 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2693, replica count: 2 I0607 22:09:39.767998 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 22:09:42.768198 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 22:09:45.768490 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 22:09:48.768704 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 22:09:51.768975 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 7 22:09:51.834: INFO: Creating new exec pod Jun 7 22:09:55.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2693 execpodsbvtg -- /bin/sh -x -c nslookup nodeport-service' Jun 7 22:09:56.194: INFO: stderr: "I0607 22:09:55.999671 3530 log.go:172] (0xc000112d10) (0xc0006dbe00) Create stream\nI0607 22:09:55.999730 3530 log.go:172] (0xc000112d10) (0xc0006dbe00) Stream added, broadcasting: 1\nI0607 22:09:56.003087 3530 log.go:172] (0xc000112d10) Reply frame 
received for 1\nI0607 22:09:56.003130 3530 log.go:172] (0xc000112d10) (0xc00052d4a0) Create stream\nI0607 22:09:56.003140 3530 log.go:172] (0xc000112d10) (0xc00052d4a0) Stream added, broadcasting: 3\nI0607 22:09:56.004111 3530 log.go:172] (0xc000112d10) Reply frame received for 3\nI0607 22:09:56.004165 3530 log.go:172] (0xc000112d10) (0xc0006dbea0) Create stream\nI0607 22:09:56.004182 3530 log.go:172] (0xc000112d10) (0xc0006dbea0) Stream added, broadcasting: 5\nI0607 22:09:56.005082 3530 log.go:172] (0xc000112d10) Reply frame received for 5\nI0607 22:09:56.091134 3530 log.go:172] (0xc000112d10) Data frame received for 5\nI0607 22:09:56.091161 3530 log.go:172] (0xc0006dbea0) (5) Data frame handling\nI0607 22:09:56.091175 3530 log.go:172] (0xc0006dbea0) (5) Data frame sent\n+ nslookup nodeport-service\nI0607 22:09:56.183870 3530 log.go:172] (0xc000112d10) Data frame received for 3\nI0607 22:09:56.183905 3530 log.go:172] (0xc00052d4a0) (3) Data frame handling\nI0607 22:09:56.183925 3530 log.go:172] (0xc00052d4a0) (3) Data frame sent\nI0607 22:09:56.184898 3530 log.go:172] (0xc000112d10) Data frame received for 3\nI0607 22:09:56.184914 3530 log.go:172] (0xc00052d4a0) (3) Data frame handling\nI0607 22:09:56.184927 3530 log.go:172] (0xc00052d4a0) (3) Data frame sent\nI0607 22:09:56.185868 3530 log.go:172] (0xc000112d10) Data frame received for 3\nI0607 22:09:56.185888 3530 log.go:172] (0xc00052d4a0) (3) Data frame handling\nI0607 22:09:56.185909 3530 log.go:172] (0xc000112d10) Data frame received for 5\nI0607 22:09:56.185917 3530 log.go:172] (0xc0006dbea0) (5) Data frame handling\nI0607 22:09:56.188324 3530 log.go:172] (0xc000112d10) Data frame received for 1\nI0607 22:09:56.188365 3530 log.go:172] (0xc0006dbe00) (1) Data frame handling\nI0607 22:09:56.188386 3530 log.go:172] (0xc0006dbe00) (1) Data frame sent\nI0607 22:09:56.188412 3530 log.go:172] (0xc000112d10) (0xc0006dbe00) Stream removed, broadcasting: 1\nI0607 22:09:56.188429 3530 log.go:172] (0xc000112d10) Go 
away received\nI0607 22:09:56.188887 3530 log.go:172] (0xc000112d10) (0xc0006dbe00) Stream removed, broadcasting: 1\nI0607 22:09:56.188919 3530 log.go:172] (0xc000112d10) (0xc00052d4a0) Stream removed, broadcasting: 3\nI0607 22:09:56.188938 3530 log.go:172] (0xc000112d10) (0xc0006dbea0) Stream removed, broadcasting: 5\n" Jun 7 22:09:56.194: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2693.svc.cluster.local\tcanonical name = externalsvc.services-2693.svc.cluster.local.\nName:\texternalsvc.services-2693.svc.cluster.local\nAddress: 10.106.125.68\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2693, will wait for the garbage collector to delete the pods Jun 7 22:09:56.266: INFO: Deleting ReplicationController externalsvc took: 11.197847ms Jun 7 22:09:56.567: INFO: Terminating ReplicationController externalsvc pods took: 300.289222ms Jun 7 22:10:09.593: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:10:09.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2693" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:33.155 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":218,"skipped":3722,"failed":0} S ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:10:09.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-e08915b2-46dc-4d1a-8c2d-90d101c80a1c STEP: Creating secret with name s-test-opt-upd-c801f08d-1d0c-43d9-ab3e-a46f3101b845 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e08915b2-46dc-4d1a-8c2d-90d101c80a1c STEP: Updating secret s-test-opt-upd-c801f08d-1d0c-43d9-ab3e-a46f3101b845 STEP: Creating secret with name s-test-opt-create-dfd2dbc0-9e13-4c51-abc6-55792eaaf210 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:10:19.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7753" for this suite. • [SLOW TEST:10.179 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3723,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:10:19.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:10:19.908: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:10:20.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5457" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":220,"skipped":3724,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:10:20.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c272cfe9-083f-46c7-8f8c-303655692b42 STEP: Creating a pod to test consume configMaps Jun 7 22:10:20.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-caee1358-2229-4e33-881d-3ddf4f790cac" in namespace "configmap-1072" to be "success or failure" Jun 7 22:10:20.641: INFO: Pod "pod-configmaps-caee1358-2229-4e33-881d-3ddf4f790cac": Phase="Pending", Reason="", readiness=false. Elapsed: 12.442423ms Jun 7 22:10:22.646: INFO: Pod "pod-configmaps-caee1358-2229-4e33-881d-3ddf4f790cac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017118028s Jun 7 22:10:24.650: INFO: Pod "pod-configmaps-caee1358-2229-4e33-881d-3ddf4f790cac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020987192s STEP: Saw pod success Jun 7 22:10:24.650: INFO: Pod "pod-configmaps-caee1358-2229-4e33-881d-3ddf4f790cac" satisfied condition "success or failure" Jun 7 22:10:24.652: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-caee1358-2229-4e33-881d-3ddf4f790cac container configmap-volume-test: STEP: delete the pod Jun 7 22:10:24.673: INFO: Waiting for pod pod-configmaps-caee1358-2229-4e33-881d-3ddf4f790cac to disappear Jun 7 22:10:24.687: INFO: Pod pod-configmaps-caee1358-2229-4e33-881d-3ddf4f790cac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:10:24.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1072" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3724,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:10:24.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jun 7 22:10:24.972: INFO: Waiting up to 1m0s for 
all (but 0) nodes to be ready Jun 7 22:10:25.033: INFO: Waiting for terminating namespaces to be deleted... Jun 7 22:10:25.049: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jun 7 22:10:25.059: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 22:10:25.060: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 22:10:25.060: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 22:10:25.060: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 22:10:25.060: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jun 7 22:10:25.069: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 22:10:25.069: INFO: Container kindnet-cni ready: true, restart count 2 Jun 7 22:10:25.069: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jun 7 22:10:25.069: INFO: Container kube-bench ready: false, restart count 0 Jun 7 22:10:25.069: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jun 7 22:10:25.069: INFO: Container kube-proxy ready: true, restart count 0 Jun 7 22:10:25.069: INFO: pod-projected-secrets-b19630ae-8e91-4436-bc25-60e844d53c98 from projected-7753 started at 2020-06-07 22:10:09 +0000 UTC (3 container statuses recorded) Jun 7 22:10:25.069: INFO: Container creates-volume-test ready: true, restart count 0 Jun 7 22:10:25.069: INFO: Container dels-volume-test ready: true, restart count 0 Jun 7 22:10:25.069: INFO: Container upds-volume-test ready: true, restart count 0 Jun 7 22:10:25.069: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jun 7 22:10:25.069: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that 
NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161662812239874c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16166281239033bf], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:10:26.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7112" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":222,"skipped":3733,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:10:26.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 22:10:26.368: INFO: Waiting up to 5m0s for pod "downwardapi-volume-128b2225-7665-447a-b9fb-319be72ae725" in namespace "projected-6888" to be "success or failure" Jun 7 22:10:26.415: INFO: Pod "downwardapi-volume-128b2225-7665-447a-b9fb-319be72ae725": Phase="Pending", Reason="", readiness=false. Elapsed: 47.756856ms Jun 7 22:10:28.424: INFO: Pod "downwardapi-volume-128b2225-7665-447a-b9fb-319be72ae725": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056742662s Jun 7 22:10:30.428: INFO: Pod "downwardapi-volume-128b2225-7665-447a-b9fb-319be72ae725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060375693s STEP: Saw pod success Jun 7 22:10:30.428: INFO: Pod "downwardapi-volume-128b2225-7665-447a-b9fb-319be72ae725" satisfied condition "success or failure" Jun 7 22:10:30.431: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-128b2225-7665-447a-b9fb-319be72ae725 container client-container: STEP: delete the pod Jun 7 22:10:30.486: INFO: Waiting for pod downwardapi-volume-128b2225-7665-447a-b9fb-319be72ae725 to disappear Jun 7 22:10:30.490: INFO: Pod downwardapi-volume-128b2225-7665-447a-b9fb-319be72ae725 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:10:30.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6888" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:10:30.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-fe308c83-ce81-4864-8dfa-fed54c921d8c STEP: Creating a pod to test consume secrets Jun 7 22:10:30.681: INFO: Waiting up to 5m0s for pod "pod-secrets-27c67b3d-d6c8-4fbd-a4a2-84736ed298ab" in namespace "secrets-4691" to be "success or failure" Jun 7 22:10:30.700: INFO: Pod "pod-secrets-27c67b3d-d6c8-4fbd-a4a2-84736ed298ab": Phase="Pending", Reason="", readiness=false. Elapsed: 19.151766ms Jun 7 22:10:32.703: INFO: Pod "pod-secrets-27c67b3d-d6c8-4fbd-a4a2-84736ed298ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021759763s Jun 7 22:10:34.706: INFO: Pod "pod-secrets-27c67b3d-d6c8-4fbd-a4a2-84736ed298ab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02548548s STEP: Saw pod success Jun 7 22:10:34.706: INFO: Pod "pod-secrets-27c67b3d-d6c8-4fbd-a4a2-84736ed298ab" satisfied condition "success or failure" Jun 7 22:10:34.709: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-27c67b3d-d6c8-4fbd-a4a2-84736ed298ab container secret-volume-test: STEP: delete the pod Jun 7 22:10:34.762: INFO: Waiting for pod pod-secrets-27c67b3d-d6c8-4fbd-a4a2-84736ed298ab to disappear Jun 7 22:10:34.772: INFO: Pod pod-secrets-27c67b3d-d6c8-4fbd-a4a2-84736ed298ab no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:10:34.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4691" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3806,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:10:34.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2580.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2580.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2580.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2580.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2580.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2580.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 22:10:40.960: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:40.964: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:40.969: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:40.977: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:40.980: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:40.982: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod 
dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:40.984: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:40.991: INFO: Lookups using dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f failed for: [wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local] Jun 7 22:10:46.003: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:46.006: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:46.020: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:46.023: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the 
requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:46.026: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:46.032: INFO: Lookups using dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f failed for: [wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2580.svc.cluster.local jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local] Jun 7 22:10:51.022: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:51.025: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:51.039: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:51.042: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:51.049: INFO: Lookups using dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f failed for: [wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local 
wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local] Jun 7 22:10:56.004: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:56.007: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:56.023: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:56.026: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:10:56.032: INFO: Lookups using dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f failed for: [wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local] Jun 7 22:11:01.029: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:11:01.033: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod 
dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:11:01.052: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:11:01.055: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:11:01.060: INFO: Lookups using dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f failed for: [wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local] Jun 7 22:11:06.004: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:11:06.008: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:11:06.025: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local from pod dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:11:06.028: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local from pod 
dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f: the server could not find the requested resource (get pods dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f) Jun 7 22:11:06.034: INFO: Lookups using dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f failed for: [wheezy_udp@dns-test-service-2.dns-2580.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2580.svc.cluster.local jessie_udp@dns-test-service-2.dns-2580.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2580.svc.cluster.local] Jun 7 22:11:11.027: INFO: DNS probes using dns-2580/dns-test-095b0eae-89e8-44b4-8a65-9ee16a22e45f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:11:11.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2580" for this suite. • [SLOW TEST:36.578 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":225,"skipped":3826,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:11:11.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for 
a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:11:11.996: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 7 22:11:14.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7730 create -f -' Jun 7 22:11:18.096: INFO: stderr: "" Jun 7 22:11:18.096: INFO: stdout: "e2e-test-crd-publish-openapi-129-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 7 22:11:18.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7730 delete e2e-test-crd-publish-openapi-129-crds test-cr' Jun 7 22:11:18.197: INFO: stderr: "" Jun 7 22:11:18.197: INFO: stdout: "e2e-test-crd-publish-openapi-129-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 7 22:11:18.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7730 apply -f -' Jun 7 22:11:18.424: INFO: stderr: "" Jun 7 22:11:18.424: INFO: stdout: "e2e-test-crd-publish-openapi-129-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 7 22:11:18.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7730 delete e2e-test-crd-publish-openapi-129-crds test-cr' Jun 7 22:11:18.547: INFO: stderr: "" Jun 7 22:11:18.547: INFO: stdout: "e2e-test-crd-publish-openapi-129-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 7 22:11:18.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-129-crds' Jun 7 22:11:18.764: INFO: stderr: "" Jun 7 22:11:18.764: INFO: stdout: "KIND: 
E2e-test-crd-publish-openapi-129-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:11:20.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7730" for this suite. • [SLOW TEST:9.256 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":226,"skipped":3826,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:11:20.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Creating a pod to test downward API volume plugin Jun 7 22:11:20.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5aae7163-8892-4d02-85ed-d8b5c19d5635" in namespace "projected-2811" to be "success or failure" Jun 7 22:11:20.763: INFO: Pod "downwardapi-volume-5aae7163-8892-4d02-85ed-d8b5c19d5635": Phase="Pending", Reason="", readiness=false. Elapsed: 34.000481ms Jun 7 22:11:22.767: INFO: Pod "downwardapi-volume-5aae7163-8892-4d02-85ed-d8b5c19d5635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037594082s Jun 7 22:11:24.771: INFO: Pod "downwardapi-volume-5aae7163-8892-4d02-85ed-d8b5c19d5635": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04201554s STEP: Saw pod success Jun 7 22:11:24.771: INFO: Pod "downwardapi-volume-5aae7163-8892-4d02-85ed-d8b5c19d5635" satisfied condition "success or failure" Jun 7 22:11:24.774: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5aae7163-8892-4d02-85ed-d8b5c19d5635 container client-container: STEP: delete the pod Jun 7 22:11:24.818: INFO: Waiting for pod downwardapi-volume-5aae7163-8892-4d02-85ed-d8b5c19d5635 to disappear Jun 7 22:11:24.822: INFO: Pod downwardapi-volume-5aae7163-8892-4d02-85ed-d8b5c19d5635 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:11:24.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2811" for this suite. 
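The DefaultMode spec above mounts a projected downward API volume with a default file mode and then checks the permissions of the resulting files inside the container. A small sketch of rendering a numeric mode as the `ls -l`-style string such a check compares against (regular files only; this is illustrative, not the framework's code):

```python
def mode_string(mode):
    """Render a numeric file mode (e.g. 0o644) as an ls -l style string
    for a regular file, e.g. '-rw-r--r--'."""
    bits = "rwxrwxrwx"
    return "-" + "".join(
        c if mode & (1 << (8 - i)) else "-"
        for i, c in enumerate(bits)
    )
```

With the Kubernetes default of 0644 for volume files, `mode_string(0o644)` yields `-rw-r--r--`, which is what a test reading the file's permissions from inside the container would expect.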
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3829,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:11:24.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 7 22:11:28.965: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:11:29.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1511" for this suite. 
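The spec above sets `terminationMessagePolicy: FallbackToLogsOnError` and expects the message "DONE" to be recovered from the container's log output when the termination-message file is empty. A sketch of that selection rule; the helper name and the byte cap are illustrative assumptions, not the kubelet's actual implementation:

```python
def termination_message(policy, message_file_contents, exit_code,
                        container_logs, limit=2048):
    """Choose a container's termination message.

    Sketch of FallbackToLogsOnError semantics: prefer the
    termination-message file; if it is empty and the container failed,
    fall back to a tail of the logs. `limit` is an assumed cap, not the
    kubelet's exact value.
    """
    if message_file_contents:
        return message_file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return container_logs[-limit:]
    return ""
```

In the passing spec above, the message file was empty, the container exited non-zero, and the log tail ("DONE") became the termination message.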
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3850,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:11:29.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:11:45.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8331" for this suite. • [SLOW TEST:16.126 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":229,"skipped":3866,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:11:45.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jun 7 22:11:45.222: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jun 7 22:11:55.578: INFO: >>> kubeConfig: /root/.kube/config Jun 7 22:11:57.497: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:12:07.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6947" for this suite. 
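The CRD publish-openapi specs above verify that every served version, whether from one multi-version CRD or from two CRDs in the same group, shows up in the aggregated OpenAPI document. A sketch of collecting served versions from published schema definitions via the `x-kubernetes-group-version-kind` extension; the group, kind, and definition names in the usage below are made up for illustration:

```python
def served_versions(definitions, group, kind):
    """Collect the versions under which `kind` of `group` appears in an
    OpenAPI definitions map, using the x-kubernetes-group-version-kind
    extension each published schema carries."""
    versions = set()
    for schema in definitions.values():
        for gvk in schema.get("x-kubernetes-group-version-kind", []):
            if gvk.get("group") == group and gvk.get("kind") == kind:
                versions.add(gvk.get("version"))
    return versions
```

A check like the spec's would then assert that both versions are present, e.g. `served_versions(defs, "multi-ver.example.com", "E2eTestFoo") == {"v4", "v5"}` for a CRD serving v4 and v5.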
• [SLOW TEST:22.796 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":230,"skipped":3867,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:12:07.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:12:24.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7599" for this suite. • [SLOW TEST:16.250 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":231,"skipped":3867,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:12:24.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 22:12:24.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20644952-f540-4cb8-a368-f6eaa902062b" in namespace "downward-api-108" to be "success or failure" Jun 7 22:12:24.297: INFO: Pod "downwardapi-volume-20644952-f540-4cb8-a368-f6eaa902062b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.37183ms Jun 7 22:12:26.303: INFO: Pod "downwardapi-volume-20644952-f540-4cb8-a368-f6eaa902062b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010000155s Jun 7 22:12:28.308: INFO: Pod "downwardapi-volume-20644952-f540-4cb8-a368-f6eaa902062b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01430934s STEP: Saw pod success Jun 7 22:12:28.308: INFO: Pod "downwardapi-volume-20644952-f540-4cb8-a368-f6eaa902062b" satisfied condition "success or failure" Jun 7 22:12:28.311: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-20644952-f540-4cb8-a368-f6eaa902062b container client-container: STEP: delete the pod Jun 7 22:12:28.353: INFO: Waiting for pod downwardapi-volume-20644952-f540-4cb8-a368-f6eaa902062b to disappear Jun 7 22:12:28.381: INFO: Pod downwardapi-volume-20644952-f540-4cb8-a368-f6eaa902062b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:12:28.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-108" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3882,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:12:28.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:12:28.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6021" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":233,"skipped":3890,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:12:28.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:12:28.611: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 7 22:12:28.632: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 7 22:12:33.638: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 7 22:12:33.638: INFO: Creating deployment "test-rolling-update-deployment" Jun 7 22:12:33.692: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 7 22:12:33.699: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 7 22:12:35.706: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 7 22:12:35.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164753, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164753, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164753, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164753, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:12:37.713: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 7 22:12:37.721: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7070 /apis/apps/v1/namespaces/deployment-7070/deployments/test-rolling-update-deployment ae73311a-c4c9-4fba-ab7a-7fff880bec16 22543479 1 2020-06-07 22:12:33 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00373bee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-07 22:12:33 +0000 UTC,LastTransitionTime:2020-06-07 22:12:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-06-07 22:12:37 +0000 UTC,LastTransitionTime:2020-06-07 22:12:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 7 22:12:37.724: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7070 /apis/apps/v1/namespaces/deployment-7070/replicasets/test-rolling-update-deployment-67cf4f6444 15151ccf-931d-47de-a713-a0558bfc84ea 22543468 1 2020-06-07 22:12:33 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment 
test-rolling-update-deployment ae73311a-c4c9-4fba-ab7a-7fff880bec16 0xc0035e86a7 0xc0035e86a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035e8788 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 7 22:12:37.724: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 7 22:12:37.724: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7070 /apis/apps/v1/namespaces/deployment-7070/replicasets/test-rolling-update-controller 55949f3e-cd67-4345-89c5-0f1c659f96d9 22543477 2 2020-06-07 22:12:28 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ae73311a-c4c9-4fba-ab7a-7fff880bec16 0xc0035e8537 0xc0035e8538}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0035e8618 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 7 22:12:37.727: INFO: Pod "test-rolling-update-deployment-67cf4f6444-rlgld" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-rlgld test-rolling-update-deployment-67cf4f6444- deployment-7070 /api/v1/namespaces/deployment-7070/pods/test-rolling-update-deployment-67cf4f6444-rlgld 01ede922-160a-4adf-921b-df526606efd0 22543467 0 2020-06-07 22:12:33 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 15151ccf-931d-47de-a713-a0558bfc84ea 0xc00361a9e7 0xc00361a9e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zzdth,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zzdth,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zzdth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:12:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:12:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:12:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:12:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.108,StartTime:2020-06-07 22:12:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:12:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://57c9e5f607e81353602a6b7fa2b0993498a584f9177c69799d1152a6be56de47,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:12:37.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7070" for this suite. • [SLOW TEST:9.160 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":234,"skipped":3895,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:12:37.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in 
namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:12:41.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8077" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3900,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:12:41.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 7 22:12:50.138: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 7 22:12:50.147: INFO: Pod pod-with-prestop-http-hook still exists Jun 7 22:12:52.147: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 7 22:12:52.152: INFO: Pod pod-with-prestop-http-hook still exists Jun 7 22:12:54.147: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 7 22:12:54.151: INFO: Pod pod-with-prestop-http-hook still exists Jun 7 22:12:56.147: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 7 22:12:56.151: INFO: Pod pod-with-prestop-http-hook still exists Jun 7 22:12:58.147: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 7 22:12:58.152: INFO: Pod pod-with-prestop-http-hook still exists Jun 7 22:13:00.147: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 7 22:13:00.152: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:13:00.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6686" for this suite. 
• [SLOW TEST:18.195 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3900,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:13:00.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3710 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3710 STEP: 
Creating statefulset with conflicting port in namespace statefulset-3710 STEP: Waiting until pod test-pod will start running in namespace statefulset-3710 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3710 Jun 7 22:13:04.390: INFO: Observed stateful pod in namespace: statefulset-3710, name: ss-0, uid: 2e46f3bb-107d-4873-bf9e-693755361bb2, status phase: Pending. Waiting for statefulset controller to delete. Jun 7 22:13:04.920: INFO: Observed stateful pod in namespace: statefulset-3710, name: ss-0, uid: 2e46f3bb-107d-4873-bf9e-693755361bb2, status phase: Failed. Waiting for statefulset controller to delete. Jun 7 22:13:04.927: INFO: Observed stateful pod in namespace: statefulset-3710, name: ss-0, uid: 2e46f3bb-107d-4873-bf9e-693755361bb2, status phase: Failed. Waiting for statefulset controller to delete. Jun 7 22:13:04.933: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3710 STEP: Removing pod with conflicting port in namespace statefulset-3710 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3710 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jun 7 22:13:08.986: INFO: Deleting all statefulset in ns statefulset-3710 Jun 7 22:13:08.988: INFO: Scaling statefulset ss to 0 Jun 7 22:13:29.014: INFO: Waiting for statefulset status.replicas updated to 0 Jun 7 22:13:29.017: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:13:29.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3710" for this suite. 
• [SLOW TEST:28.869 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":237,"skipped":3907,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:13:29.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-65cadec4-c7c7-464c-87f6-c6b51d740341 STEP: Creating a pod to test consume configMaps Jun 7 22:13:29.162: INFO: Waiting up to 5m0s for pod "pod-configmaps-69fbb2a3-49e0-4902-8589-2d9b56b921ab" in namespace "configmap-7643" to be "success or failure" Jun 7 22:13:29.166: INFO: Pod "pod-configmaps-69fbb2a3-49e0-4902-8589-2d9b56b921ab": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.168634ms Jun 7 22:13:31.170: INFO: Pod "pod-configmaps-69fbb2a3-49e0-4902-8589-2d9b56b921ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007681993s Jun 7 22:13:33.174: INFO: Pod "pod-configmaps-69fbb2a3-49e0-4902-8589-2d9b56b921ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011888871s STEP: Saw pod success Jun 7 22:13:33.174: INFO: Pod "pod-configmaps-69fbb2a3-49e0-4902-8589-2d9b56b921ab" satisfied condition "success or failure" Jun 7 22:13:33.176: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-69fbb2a3-49e0-4902-8589-2d9b56b921ab container configmap-volume-test: STEP: delete the pod Jun 7 22:13:33.228: INFO: Waiting for pod pod-configmaps-69fbb2a3-49e0-4902-8589-2d9b56b921ab to disappear Jun 7 22:13:33.242: INFO: Pod pod-configmaps-69fbb2a3-49e0-4902-8589-2d9b56b921ab no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:13:33.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7643" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:13:33.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:13:33.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9872" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":239,"skipped":3935,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:13:33.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-32604b94-4ce4-4f7d-821c-fe66def9e798 STEP: Creating a pod to test consume secrets Jun 7 22:13:33.516: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e213fd7c-a633-47ef-b56c-76d3d15add5e" in namespace "projected-9337" to be "success or failure" Jun 7 22:13:33.521: INFO: Pod "pod-projected-secrets-e213fd7c-a633-47ef-b56c-76d3d15add5e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.298839ms Jun 7 22:13:35.971: INFO: Pod "pod-projected-secrets-e213fd7c-a633-47ef-b56c-76d3d15add5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.455333221s Jun 7 22:13:37.976: INFO: Pod "pod-projected-secrets-e213fd7c-a633-47ef-b56c-76d3d15add5e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.459765863s STEP: Saw pod success Jun 7 22:13:37.976: INFO: Pod "pod-projected-secrets-e213fd7c-a633-47ef-b56c-76d3d15add5e" satisfied condition "success or failure" Jun 7 22:13:37.979: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-e213fd7c-a633-47ef-b56c-76d3d15add5e container projected-secret-volume-test: STEP: delete the pod Jun 7 22:13:38.000: INFO: Waiting for pod pod-projected-secrets-e213fd7c-a633-47ef-b56c-76d3d15add5e to disappear Jun 7 22:13:38.004: INFO: Pod pod-projected-secrets-e213fd7c-a633-47ef-b56c-76d3d15add5e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:13:38.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9337" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3957,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:13:38.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-86f3923e-4e92-4467-bc07-5418af4ca725 STEP: Creating configMap with name 
cm-test-opt-upd-faf88d5e-6734-421b-9f54-eb6184db9d0e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-86f3923e-4e92-4467-bc07-5418af4ca725 STEP: Updating configmap cm-test-opt-upd-faf88d5e-6734-421b-9f54-eb6184db9d0e STEP: Creating configMap with name cm-test-opt-create-25bd8ecf-b39b-48c5-b213-423d76a492c4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:13:46.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8108" for this suite. • [SLOW TEST:8.186 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3968,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:13:46.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 
STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 22:13:46.976: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 22:13:48.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164827, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164827, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164827, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164826, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:13:50.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164827, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164827, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164827, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164826, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 22:13:54.258: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:13:55.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2154" for this suite. STEP: Destroying namespace "webhook-2154-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.917 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":242,"skipped":3973,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:13:55.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jun 7 22:13:59.800: INFO: Successfully updated pod "annotationupdatee1bfb3ae-bfcc-4499-974d-4c5b8253f9f1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:14:03.825: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3763" for this suite. • [SLOW TEST:8.707 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3979,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:14:03.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9987.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9987.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9987.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9987.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9987.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9987.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 22:14:10.008: INFO: DNS probes using dns-9987/dns-test-365b3c49-463d-4fa7-b4bb-1af397e1baac succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:14:10.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9987" for this suite. 
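The wheezy/jessie probe loops logged above follow a simple pattern: retry a name lookup until it succeeds, then write an `OK` marker file for the prober to collect. A simplified, runnable sketch of that pattern follows — `localhost`, `getent`-only lookup, and the 5-iteration budget are illustrative stand-ins, not the suite's actual probe image or in-cluster names:

```shell
# Simplified sketch of the e2e DNS probe loop: retry a lookup until it
# succeeds, then drop an OK marker file. "localhost" and the 5-try
# budget are stand-ins for the in-cluster names and 600-try loop.
results="$(mktemp -d)"
name="localhost"

for i in $(seq 1 5); do
  if [ -n "$(getent hosts "$name")" ]; then
    echo OK > "$results/hosts@$name"
    break
  fi
  sleep 1
done

cat "$results/hosts@$name"
```

The real probes additionally exercise `dig` over both UDP (`+notcp`) and TCP (`+tcp`) with `+search` so that partial names resolve through the pod's search domains.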
• [SLOW TEST:6.323 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":244,"skipped":3987,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:14:10.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-f43a588e-871a-491e-9461-5c4daa7d198f STEP: Creating configMap with name cm-test-opt-upd-ddb3f368-82b4-430d-ae3e-6a71fc14d774 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f43a588e-871a-491e-9461-5c4daa7d198f STEP: Updating configmap cm-test-opt-upd-ddb3f368-82b4-430d-ae3e-6a71fc14d774 STEP: Creating configMap with name cm-test-opt-create-02e71a11-0a70-4e7d-9119-208e684a3b69 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:15:25.512: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4167" for this suite. • [SLOW TEST:75.363 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4002,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:15:25.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0cbe05db-8415-4227-8a09-d51b083b75d9 STEP: Creating a pod to test consume secrets Jun 7 22:15:25.733: INFO: Waiting up to 5m0s for pod "pod-secrets-4402a6f8-4ae9-4907-9d11-50bea5822529" in namespace "secrets-7882" to be "success or failure" Jun 7 22:15:25.738: INFO: Pod "pod-secrets-4402a6f8-4ae9-4907-9d11-50bea5822529": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.601375ms Jun 7 22:15:27.804: INFO: Pod "pod-secrets-4402a6f8-4ae9-4907-9d11-50bea5822529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070219819s Jun 7 22:15:29.809: INFO: Pod "pod-secrets-4402a6f8-4ae9-4907-9d11-50bea5822529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075344988s STEP: Saw pod success Jun 7 22:15:29.809: INFO: Pod "pod-secrets-4402a6f8-4ae9-4907-9d11-50bea5822529" satisfied condition "success or failure" Jun 7 22:15:29.812: INFO: Trying to get logs from node jerma-worker pod pod-secrets-4402a6f8-4ae9-4907-9d11-50bea5822529 container secret-volume-test: STEP: delete the pod Jun 7 22:15:29.862: INFO: Waiting for pod pod-secrets-4402a6f8-4ae9-4907-9d11-50bea5822529 to disappear Jun 7 22:15:29.917: INFO: Pod pod-secrets-4402a6f8-4ae9-4907-9d11-50bea5822529 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:15:29.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7882" for this suite. STEP: Destroying namespace "secret-namespace-5552" for this suite. 
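The framework's "success or failure" wait, visible in the Pending → Succeeded phase lines with elapsed times above, amounts to a timeout-bounded poll of the pod's phase. A minimal sketch of that polling logic, where the hard-coded phase transition stands in for querying the pod's real status through the API:

```shell
# Sketch of the framework's condition polling: re-check a pod phase
# until it reaches a terminal state or the timeout elapses. The forced
# phase change below stands in for an API query; timings are illustrative.
start=$(date +%s)
timeout=5
phase="Pending"
result=""

while [ -z "$result" ]; do
  elapsed=$(( $(date +%s) - start ))
  if [ "$phase" = "Succeeded" ]; then
    result="satisfied condition after ${elapsed}s"
  elif [ "$elapsed" -ge "$timeout" ]; then
    result="timed out waiting for pod"
  else
    phase="Succeeded"   # stand-in: the real loop reads pod.Status.Phase
    sleep 1
  fi
done

echo "$result"
```

This matches the shape of the log lines: each poll reports the current phase and the elapsed time since the wait began.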
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4009,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:15:29.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 22:15:31.528: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 22:15:33.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164931, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164931, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164931, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164931, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:15:35.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164931, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164931, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164931, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727164931, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 22:15:38.638: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:15:38.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7150-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:15:39.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-302" for this suite. STEP: Destroying namespace "webhook-302-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.905 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":247,"skipped":4009,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:15:39.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search 
dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4326 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4326;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4326 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4326;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4326.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4326.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4326.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4326.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4326.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4326.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4326.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4326.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4326.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4326.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4326.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 172.177.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.177.172_udp@PTR;check="$$(dig +tcp +noall +answer +search 172.177.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.177.172_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4326 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4326;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4326 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4326;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4326.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4326.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4326.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4326.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4326.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4326.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4326.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-4326.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4326.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4326.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4326.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4326.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 172.177.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.177.172_udp@PTR;check="$$(dig +tcp +noall +answer +search 172.177.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.177.172_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 22:15:46.068: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.072: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.075: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.078: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: 
the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.081: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.083: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.087: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.090: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.118: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.121: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.124: INFO: Unable to read jessie_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.126: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326 from pod 
dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.129: INFO: Unable to read jessie_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.132: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.135: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.139: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:46.157: INFO: Lookups using dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4326 wheezy_tcp@dns-test-service.dns-4326 wheezy_udp@dns-test-service.dns-4326.svc wheezy_tcp@dns-test-service.dns-4326.svc wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4326 jessie_tcp@dns-test-service.dns-4326 jessie_udp@dns-test-service.dns-4326.svc jessie_tcp@dns-test-service.dns-4326.svc jessie_udp@_http._tcp.dns-test-service.dns-4326.svc jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc] Jun 7 22:15:51.163: INFO: Unable to read 
wheezy_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.167: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.174: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.177: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.184: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.187: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.208: INFO: 
Unable to read jessie_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.211: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.215: INFO: Unable to read jessie_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.218: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.221: INFO: Unable to read jessie_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.226: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.229: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:51.231: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 
22:15:51.250: INFO: Lookups using dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4326 wheezy_tcp@dns-test-service.dns-4326 wheezy_udp@dns-test-service.dns-4326.svc wheezy_tcp@dns-test-service.dns-4326.svc wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4326 jessie_tcp@dns-test-service.dns-4326 jessie_udp@dns-test-service.dns-4326.svc jessie_tcp@dns-test-service.dns-4326.svc jessie_udp@_http._tcp.dns-test-service.dns-4326.svc jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc] Jun 7 22:15:56.162: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.166: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.170: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.172: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.179: INFO: Unable 
to read wheezy_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.182: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.185: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.202: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.204: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.207: INFO: Unable to read jessie_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.210: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.213: INFO: Unable to read jessie_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.216: 
INFO: Unable to read jessie_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.219: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.222: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:15:56.328: INFO: Lookups using dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4326 wheezy_tcp@dns-test-service.dns-4326 wheezy_udp@dns-test-service.dns-4326.svc wheezy_tcp@dns-test-service.dns-4326.svc wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4326 jessie_tcp@dns-test-service.dns-4326 jessie_udp@dns-test-service.dns-4326.svc jessie_tcp@dns-test-service.dns-4326.svc jessie_udp@_http._tcp.dns-test-service.dns-4326.svc jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc] Jun 7 22:16:01.163: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.167: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 
22:16:01.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.174: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.184: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.187: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.208: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.212: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods 
dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.216: INFO: Unable to read jessie_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.219: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.222: INFO: Unable to read jessie_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.224: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.227: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.229: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:01.248: INFO: Lookups using dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4326 wheezy_tcp@dns-test-service.dns-4326 wheezy_udp@dns-test-service.dns-4326.svc wheezy_tcp@dns-test-service.dns-4326.svc wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4326 jessie_tcp@dns-test-service.dns-4326 jessie_udp@dns-test-service.dns-4326.svc jessie_tcp@dns-test-service.dns-4326.svc jessie_udp@_http._tcp.dns-test-service.dns-4326.svc jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc] Jun 7 22:16:06.163: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.167: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.170: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.174: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.183: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc from pod 
dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.186: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.207: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.210: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.219: INFO: Unable to read jessie_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.222: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.224: INFO: Unable to read jessie_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.226: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.228: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.231: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:06.245: INFO: Lookups using dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4326 wheezy_tcp@dns-test-service.dns-4326 wheezy_udp@dns-test-service.dns-4326.svc wheezy_tcp@dns-test-service.dns-4326.svc wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4326 jessie_tcp@dns-test-service.dns-4326 jessie_udp@dns-test-service.dns-4326.svc jessie_tcp@dns-test-service.dns-4326.svc jessie_udp@_http._tcp.dns-test-service.dns-4326.svc jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc] Jun 7 22:16:11.161: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.165: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.168: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.170: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.172: INFO: Unable to read wheezy_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.175: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.177: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.180: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.202: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.205: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.208: INFO: Unable to read jessie_udp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.211: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-4326 from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.213: INFO: Unable to read jessie_udp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.216: INFO: Unable to read jessie_tcp@dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.218: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.221: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc from pod dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4: the server could not find the requested resource (get pods dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4) Jun 7 22:16:11.235: INFO: Lookups using dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4326 wheezy_tcp@dns-test-service.dns-4326 wheezy_udp@dns-test-service.dns-4326.svc wheezy_tcp@dns-test-service.dns-4326.svc wheezy_udp@_http._tcp.dns-test-service.dns-4326.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4326.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4326 jessie_tcp@dns-test-service.dns-4326 jessie_udp@dns-test-service.dns-4326.svc jessie_tcp@dns-test-service.dns-4326.svc jessie_udp@_http._tcp.dns-test-service.dns-4326.svc jessie_tcp@_http._tcp.dns-test-service.dns-4326.svc] 
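The probe targets listed above (e.g. `_http._tcp.dns-test-service.dns-4326.svc`) are SRV-style names for a Service named dns-test-service with a port named http. A minimal sketch of such a Service — the selector, port number, and headless setting are assumptions for illustration, not taken from the test source:

```yaml
# Sketch only: a headless Service whose SRV records would answer
# _http._tcp.dns-test-service.dns-4326.svc (selector/port assumed)
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
  namespace: dns-4326
spec:
  clusterIP: None        # headless: DNS resolves directly to pod IPs
  selector:
    dns-test: "true"     # assumed label
  ports:
  - name: http           # SRV records are only published for named ports
    port: 80
    protocol: TCP
```

Cluster DNS publishes SRV records of the form `_<port-name>._<protocol>.<service>.<namespace>.svc` only for named ports, which is why the probes above target `_http._tcp.`-prefixed names.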
Jun 7 22:16:16.245: INFO: DNS probes using dns-4326/dns-test-fdd8085d-2b7c-48bb-ae14-81a8f43046b4 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:16:17.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4326" for this suite. • [SLOW TEST:37.245 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":248,"skipped":4017,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:16:17.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-2378 STEP: Waiting for pods to come up. 
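The behavior under test is a `preStop` lifecycle hook firing when a pod is deleted, which is what increments the "prestop" counter the server pod reports. A hedged sketch of a pod with such a hook — the image, port, endpoint path, and server IP are assumptions, not the actual test fixture:

```yaml
# Sketch only: a pod whose preStop hook notifies a server pod on deletion
apiVersion: v1
kind: Pod
metadata:
  name: tester
  namespace: prestop-2378
spec:
  containers:
  - name: tester
    image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /prestop     # hypothetical endpoint on the server pod
          port: 8080
          host: 10.244.1.10  # assumed server pod IP
```

The kubelet runs the `preStop` handler before sending SIGTERM to the container, and both must complete within the pod's `terminationGracePeriodSeconds`.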
STEP: Creating tester pod tester in namespace prestop-2378
STEP: Deleting pre-stop pod
Jun 7 22:16:30.271: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:16:30.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2378" for this suite.
• [SLOW TEST:13.251 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":249,"skipped":4020,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:16:30.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default
service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jun 7 22:16:30.395: INFO: Waiting up to 5m0s for pod "client-containers-a03b57a5-f979-4226-81aa-0f28a3964272" in namespace "containers-8893" to be "success or failure" Jun 7 22:16:30.399: INFO: Pod "client-containers-a03b57a5-f979-4226-81aa-0f28a3964272": Phase="Pending", Reason="", readiness=false. Elapsed: 3.385589ms Jun 7 22:16:32.414: INFO: Pod "client-containers-a03b57a5-f979-4226-81aa-0f28a3964272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018417592s Jun 7 22:16:34.417: INFO: Pod "client-containers-a03b57a5-f979-4226-81aa-0f28a3964272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022023395s STEP: Saw pod success Jun 7 22:16:34.417: INFO: Pod "client-containers-a03b57a5-f979-4226-81aa-0f28a3964272" satisfied condition "success or failure" Jun 7 22:16:34.420: INFO: Trying to get logs from node jerma-worker pod client-containers-a03b57a5-f979-4226-81aa-0f28a3964272 container test-container: STEP: delete the pod Jun 7 22:16:34.435: INFO: Waiting for pod client-containers-a03b57a5-f979-4226-81aa-0f28a3964272 to disappear Jun 7 22:16:34.447: INFO: Pod client-containers-a03b57a5-f979-4226-81aa-0f28a3964272 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:16:34.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8893" for this suite. 
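The "override arguments" assertion above relies on the Kubernetes distinction between `command` (which overrides the image ENTRYPOINT) and `args` (which overrides the image CMD). A sketch of a pod like the one the test creates — the image and argument values are assumed:

```yaml
# Sketch only: args replaces the image's default CMD (docker cmd);
# the image's ENTRYPOINT, if any, is kept
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29               # assumed image
    args: ["echo", "override", "arguments"]
```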
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4060,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:16:34.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:16:34.528: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:16:39.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4357" for this suite. 
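The listing test above operates on CustomResourceDefinition objects. A minimal `apiextensions.k8s.io/v1` CRD of the kind such a test registers — the group and resource names here are illustrative assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com   # must equal <spec.names.plural>.<spec.group>
spec:
  group: mygroup.example.com        # assumed group
  scope: Cluster
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true                   # exactly one version may be the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```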
• [SLOW TEST:5.186 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":251,"skipped":4064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:16:39.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ebbddbf3-320a-4d86-9b45-0c8ec82401b2 STEP: Creating a pod to test consume secrets Jun 7 22:16:39.762: INFO: Waiting up to 5m0s for pod "pod-secrets-3751d24c-1650-4704-a2c7-a7d691b8ed52" in namespace "secrets-2071" to be "success or failure" Jun 7 22:16:39.816: INFO: Pod 
"pod-secrets-3751d24c-1650-4704-a2c7-a7d691b8ed52": Phase="Pending", Reason="", readiness=false. Elapsed: 54.42268ms Jun 7 22:16:41.900: INFO: Pod "pod-secrets-3751d24c-1650-4704-a2c7-a7d691b8ed52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137962754s Jun 7 22:16:43.904: INFO: Pod "pod-secrets-3751d24c-1650-4704-a2c7-a7d691b8ed52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141742195s STEP: Saw pod success Jun 7 22:16:43.904: INFO: Pod "pod-secrets-3751d24c-1650-4704-a2c7-a7d691b8ed52" satisfied condition "success or failure" Jun 7 22:16:43.906: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-3751d24c-1650-4704-a2c7-a7d691b8ed52 container secret-env-test: STEP: delete the pod Jun 7 22:16:43.928: INFO: Waiting for pod pod-secrets-3751d24c-1650-4704-a2c7-a7d691b8ed52 to disappear Jun 7 22:16:43.932: INFO: Pod pod-secrets-3751d24c-1650-4704-a2c7-a7d691b8ed52 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:16:43.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2071" for this suite. 
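The consumption pattern verified above maps a Secret key into a container environment variable via `secretKeyRef`. A sketch with assumed names, keys, and image:

```yaml
# Sketch only: key/value, names, and image are assumptions
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]    # prints the injected variable
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```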
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:16:43.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jun 7 22:16:44.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e" in namespace "projected-6041" to be "success or failure" Jun 7 22:16:44.110: INFO: Pod "downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 72.963048ms Jun 7 22:16:46.113: INFO: Pod "downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076411403s Jun 7 22:16:48.117: INFO: Pod "downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.080633984s Jun 7 22:16:50.121: INFO: Pod "downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084855347s STEP: Saw pod success Jun 7 22:16:50.122: INFO: Pod "downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e" satisfied condition "success or failure" Jun 7 22:16:50.125: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e container client-container: STEP: delete the pod Jun 7 22:16:50.182: INFO: Waiting for pod downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e to disappear Jun 7 22:16:50.196: INFO: Pod downwardapi-volume-63eec936-fec8-4549-9d1d-cfc3d8d79c2e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:16:50.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6041" for this suite. 
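The "Waiting up to 5m0s for pod … to be 'success or failure'" lines above, with their roughly 2-second gaps between Phase checks, come from the framework's phase-polling loop. A minimal shell sketch of that retry shape (the function name `wait_for_phase` and the 2s interval are illustrative assumptions, not the framework's actual Go code):

```shell
#!/bin/sh
# Hedged sketch of the e2e-style poll: re-run a phase-reporting command
# every 2 seconds until it prints "Succeeded" or the time budget runs out.
wait_for_phase() {
  cmd=$1      # command (or shell function) that prints the current pod phase
  timeout=$2  # overall budget in seconds
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase=$($cmd)
    if [ "$phase" = "Succeeded" ]; then
      echo "satisfied after ${elapsed}s"
      return 0
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  echo "timed out after ${timeout}s" >&2
  return 1
}
```

The real framework polls through the Go API client rather than shelling out; this only mirrors the retry pattern visible in the log.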
• [SLOW TEST:6.258 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4160,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:16:50.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-96cabb90-9692-401f-bba6-96a04ea10e31 STEP: Creating a pod to test consume configMaps Jun 7 22:16:50.344: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c78ff330-01ac-4e3c-891a-8dae9aca5768" in namespace "projected-6530" to be "success or failure" Jun 7 22:16:50.372: INFO: Pod "pod-projected-configmaps-c78ff330-01ac-4e3c-891a-8dae9aca5768": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.662128ms Jun 7 22:16:52.375: INFO: Pod "pod-projected-configmaps-c78ff330-01ac-4e3c-891a-8dae9aca5768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030939527s Jun 7 22:16:54.379: INFO: Pod "pod-projected-configmaps-c78ff330-01ac-4e3c-891a-8dae9aca5768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034834131s STEP: Saw pod success Jun 7 22:16:54.379: INFO: Pod "pod-projected-configmaps-c78ff330-01ac-4e3c-891a-8dae9aca5768" satisfied condition "success or failure" Jun 7 22:16:54.382: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-c78ff330-01ac-4e3c-891a-8dae9aca5768 container projected-configmap-volume-test: STEP: delete the pod Jun 7 22:16:54.409: INFO: Waiting for pod pod-projected-configmaps-c78ff330-01ac-4e3c-891a-8dae9aca5768 to disappear Jun 7 22:16:54.435: INFO: Pod pod-projected-configmaps-c78ff330-01ac-4e3c-891a-8dae9aca5768 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:16:54.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6530" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4171,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:16:54.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 7 22:16:54.861: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 22:16:54.878: INFO: Number of nodes with available pods: 0 Jun 7 22:16:54.879: INFO: Node jerma-worker is running more than one daemon pod Jun 7 22:16:55.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 22:16:55.888: INFO: Number of nodes with available pods: 0 Jun 7 22:16:55.888: INFO: Node jerma-worker is running more than one daemon pod Jun 7 22:16:56.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 22:16:56.889: INFO: Number of nodes with available pods: 0 Jun 7 22:16:56.889: INFO: Node jerma-worker is running more than one daemon pod Jun 7 22:16:57.920: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 22:16:57.923: INFO: Number of nodes with available pods: 0 Jun 7 22:16:57.923: INFO: Node jerma-worker is running more than one daemon pod Jun 7 22:16:58.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 22:16:58.887: INFO: Number of nodes with available pods: 1 Jun 7 22:16:58.887: INFO: Node jerma-worker is running more than one daemon pod Jun 7 22:16:59.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 22:16:59.888: INFO: Number of nodes with available pods: 2 Jun 7 22:16:59.888: INFO: Number of running nodes: 
2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 7 22:16:59.923: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 7 22:16:59.934: INFO: Number of nodes with available pods: 2 Jun 7 22:16:59.934: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8384, will wait for the garbage collector to delete the pods Jun 7 22:17:01.016: INFO: Deleting DaemonSet.extensions daemon-set took: 6.250084ms Jun 7 22:17:01.316: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.211551ms Jun 7 22:17:09.519: INFO: Number of nodes with available pods: 0 Jun 7 22:17:09.519: INFO: Number of running nodes: 0, number of available pods: 0 Jun 7 22:17:09.521: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8384/daemonsets","resourceVersion":"22545239"},"items":null} Jun 7 22:17:09.523: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8384/pods","resourceVersion":"22545239"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:17:09.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8384" for this suite. 
• [SLOW TEST:15.114 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":255,"skipped":4188,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:17:09.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3c176e0f-a211-41f2-92ea-bc1e5def3607 STEP: Creating a pod to test consume configMaps Jun 7 22:17:09.623: INFO: Waiting up to 5m0s for pod "pod-configmaps-95c73143-7175-4c27-8647-73bde652c359" in namespace "configmap-8746" to be "success or failure" Jun 7 22:17:09.641: INFO: Pod "pod-configmaps-95c73143-7175-4c27-8647-73bde652c359": Phase="Pending", Reason="", readiness=false. Elapsed: 18.191371ms Jun 7 22:17:11.645: INFO: Pod "pod-configmaps-95c73143-7175-4c27-8647-73bde652c359": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022479676s Jun 7 22:17:13.650: INFO: Pod "pod-configmaps-95c73143-7175-4c27-8647-73bde652c359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026831507s STEP: Saw pod success Jun 7 22:17:13.650: INFO: Pod "pod-configmaps-95c73143-7175-4c27-8647-73bde652c359" satisfied condition "success or failure" Jun 7 22:17:13.653: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-95c73143-7175-4c27-8647-73bde652c359 container configmap-volume-test: STEP: delete the pod Jun 7 22:17:13.670: INFO: Waiting for pod pod-configmaps-95c73143-7175-4c27-8647-73bde652c359 to disappear Jun 7 22:17:13.719: INFO: Pod pod-configmaps-95c73143-7175-4c27-8647-73bde652c359 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:17:13.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8746" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4193,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:17:13.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jun 7 22:17:14.045: INFO: Waiting up to 5m0s for pod "downward-api-9d6a4958-a45a-4bce-b146-cd9ab380edfd" in namespace "downward-api-2278" to be "success or failure" Jun 7 22:17:14.065: INFO: Pod "downward-api-9d6a4958-a45a-4bce-b146-cd9ab380edfd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.33762ms Jun 7 22:17:16.072: INFO: Pod "downward-api-9d6a4958-a45a-4bce-b146-cd9ab380edfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027149332s Jun 7 22:17:18.077: INFO: Pod "downward-api-9d6a4958-a45a-4bce-b146-cd9ab380edfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031918367s STEP: Saw pod success Jun 7 22:17:18.077: INFO: Pod "downward-api-9d6a4958-a45a-4bce-b146-cd9ab380edfd" satisfied condition "success or failure" Jun 7 22:17:18.080: INFO: Trying to get logs from node jerma-worker pod downward-api-9d6a4958-a45a-4bce-b146-cd9ab380edfd container dapi-container: STEP: delete the pod Jun 7 22:17:18.150: INFO: Waiting for pod downward-api-9d6a4958-a45a-4bce-b146-cd9ab380edfd to disappear Jun 7 22:17:18.155: INFO: Pod downward-api-9d6a4958-a45a-4bce-b146-cd9ab380edfd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:17:18.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2278" for this suite. 
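The Downward API test above injects `limits.cpu/memory` and `requests.cpu/memory` into the container's environment, and the `dapi-container` simply reads them back. A minimal sketch of the consuming side (the variable names `CPU_LIMIT` and `MEMORY_LIMIT` are illustrative assumptions, not the test's actual names):

```shell
#!/bin/sh
# Reads resource values that the downward API would have injected as env vars.
# CPU_LIMIT / MEMORY_LIMIT are assumed names for illustration only.
# When a limit is unset, the kubelet substitutes node allocatable; the
# fallback text below just marks that case.
print_resource_env() {
  echo "cpu limit: ${CPU_LIMIT:-<node allocatable default>}"
  echo "memory limit: ${MEMORY_LIMIT:-<node allocatable default>}"
}
```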
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:17:18.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2676 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2676 I0607 22:17:18.402787 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2676, replica count: 2 I0607 22:17:21.453414 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0607 22:17:24.453659 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 7 22:17:24.453: INFO: Creating new exec pod Jun 7 22:17:29.470: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=services-2676 execpodz6sf6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 7 22:17:29.710: INFO: stderr: "I0607 22:17:29.599802 3667 log.go:172] (0xc0001056b0) (0xc0006b7ae0) Create stream\nI0607 22:17:29.599864 3667 log.go:172] (0xc0001056b0) (0xc0006b7ae0) Stream added, broadcasting: 1\nI0607 22:17:29.602985 3667 log.go:172] (0xc0001056b0) Reply frame received for 1\nI0607 22:17:29.603028 3667 log.go:172] (0xc0001056b0) (0xc0006b7cc0) Create stream\nI0607 22:17:29.603041 3667 log.go:172] (0xc0001056b0) (0xc0006b7cc0) Stream added, broadcasting: 3\nI0607 22:17:29.604052 3667 log.go:172] (0xc0001056b0) Reply frame received for 3\nI0607 22:17:29.604089 3667 log.go:172] (0xc0001056b0) (0xc00095a000) Create stream\nI0607 22:17:29.604101 3667 log.go:172] (0xc0001056b0) (0xc00095a000) Stream added, broadcasting: 5\nI0607 22:17:29.605102 3667 log.go:172] (0xc0001056b0) Reply frame received for 5\nI0607 22:17:29.682069 3667 log.go:172] (0xc0001056b0) Data frame received for 5\nI0607 22:17:29.682099 3667 log.go:172] (0xc00095a000) (5) Data frame handling\nI0607 22:17:29.682120 3667 log.go:172] (0xc00095a000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0607 22:17:29.700478 3667 log.go:172] (0xc0001056b0) Data frame received for 5\nI0607 22:17:29.700506 3667 log.go:172] (0xc00095a000) (5) Data frame handling\nI0607 22:17:29.700524 3667 log.go:172] (0xc00095a000) (5) Data frame sent\nI0607 22:17:29.700533 3667 log.go:172] (0xc0001056b0) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0607 22:17:29.700542 3667 log.go:172] (0xc00095a000) (5) Data frame handling\nI0607 22:17:29.700840 3667 log.go:172] (0xc0001056b0) Data frame received for 3\nI0607 22:17:29.700876 3667 log.go:172] (0xc0006b7cc0) (3) Data frame handling\nI0607 22:17:29.702743 3667 log.go:172] (0xc0001056b0) Data frame received for 1\nI0607 22:17:29.702782 3667 log.go:172] (0xc0006b7ae0) 
(1) Data frame handling\nI0607 22:17:29.702811 3667 log.go:172] (0xc0006b7ae0) (1) Data frame sent\nI0607 22:17:29.702839 3667 log.go:172] (0xc0001056b0) (0xc0006b7ae0) Stream removed, broadcasting: 1\nI0607 22:17:29.702865 3667 log.go:172] (0xc0001056b0) Go away received\nI0607 22:17:29.703403 3667 log.go:172] (0xc0001056b0) (0xc0006b7ae0) Stream removed, broadcasting: 1\nI0607 22:17:29.703430 3667 log.go:172] (0xc0001056b0) (0xc0006b7cc0) Stream removed, broadcasting: 3\nI0607 22:17:29.703442 3667 log.go:172] (0xc0001056b0) (0xc00095a000) Stream removed, broadcasting: 5\n" Jun 7 22:17:29.710: INFO: stdout: "" Jun 7 22:17:29.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2676 execpodz6sf6 -- /bin/sh -x -c nc -zv -t -w 2 10.111.157.83 80' Jun 7 22:17:29.908: INFO: stderr: "I0607 22:17:29.827311 3688 log.go:172] (0xc000b2ce70) (0xc000b243c0) Create stream\nI0607 22:17:29.827382 3688 log.go:172] (0xc000b2ce70) (0xc000b243c0) Stream added, broadcasting: 1\nI0607 22:17:29.831498 3688 log.go:172] (0xc000b2ce70) Reply frame received for 1\nI0607 22:17:29.831541 3688 log.go:172] (0xc000b2ce70) (0xc000b24460) Create stream\nI0607 22:17:29.831549 3688 log.go:172] (0xc000b2ce70) (0xc000b24460) Stream added, broadcasting: 3\nI0607 22:17:29.832571 3688 log.go:172] (0xc000b2ce70) Reply frame received for 3\nI0607 22:17:29.832611 3688 log.go:172] (0xc000b2ce70) (0xc000b900a0) Create stream\nI0607 22:17:29.832627 3688 log.go:172] (0xc000b2ce70) (0xc000b900a0) Stream added, broadcasting: 5\nI0607 22:17:29.833911 3688 log.go:172] (0xc000b2ce70) Reply frame received for 5\nI0607 22:17:29.901378 3688 log.go:172] (0xc000b2ce70) Data frame received for 5\nI0607 22:17:29.901411 3688 log.go:172] (0xc000b900a0) (5) Data frame handling\nI0607 22:17:29.901427 3688 log.go:172] (0xc000b900a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.111.157.83 80\nConnection to 10.111.157.83 80 port [tcp/http] succeeded!\nI0607 22:17:29.901502 3688 
log.go:172] (0xc000b2ce70) Data frame received for 5\nI0607 22:17:29.901527 3688 log.go:172] (0xc000b2ce70) Data frame received for 3\nI0607 22:17:29.901564 3688 log.go:172] (0xc000b24460) (3) Data frame handling\nI0607 22:17:29.901591 3688 log.go:172] (0xc000b900a0) (5) Data frame handling\nI0607 22:17:29.903097 3688 log.go:172] (0xc000b2ce70) Data frame received for 1\nI0607 22:17:29.903113 3688 log.go:172] (0xc000b243c0) (1) Data frame handling\nI0607 22:17:29.903127 3688 log.go:172] (0xc000b243c0) (1) Data frame sent\nI0607 22:17:29.903141 3688 log.go:172] (0xc000b2ce70) (0xc000b243c0) Stream removed, broadcasting: 1\nI0607 22:17:29.903426 3688 log.go:172] (0xc000b2ce70) (0xc000b243c0) Stream removed, broadcasting: 1\nI0607 22:17:29.903442 3688 log.go:172] (0xc000b2ce70) Go away received\nI0607 22:17:29.903464 3688 log.go:172] (0xc000b2ce70) (0xc000b24460) Stream removed, broadcasting: 3\nI0607 22:17:29.903485 3688 log.go:172] (0xc000b2ce70) (0xc000b900a0) Stream removed, broadcasting: 5\n" Jun 7 22:17:29.908: INFO: stdout: "" Jun 7 22:17:29.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2676 execpodz6sf6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30810' Jun 7 22:17:30.117: INFO: stderr: "I0607 22:17:30.031478 3707 log.go:172] (0xc0009b8630) (0xc00098e000) Create stream\nI0607 22:17:30.031531 3707 log.go:172] (0xc0009b8630) (0xc00098e000) Stream added, broadcasting: 1\nI0607 22:17:30.034506 3707 log.go:172] (0xc0009b8630) Reply frame received for 1\nI0607 22:17:30.034552 3707 log.go:172] (0xc0009b8630) (0xc00098e0a0) Create stream\nI0607 22:17:30.034566 3707 log.go:172] (0xc0009b8630) (0xc00098e0a0) Stream added, broadcasting: 3\nI0607 22:17:30.035559 3707 log.go:172] (0xc0009b8630) Reply frame received for 3\nI0607 22:17:30.035602 3707 log.go:172] (0xc0009b8630) (0xc0006a5b80) Create stream\nI0607 22:17:30.035617 3707 log.go:172] (0xc0009b8630) (0xc0006a5b80) Stream added, broadcasting: 5\nI0607 
22:17:30.036472 3707 log.go:172] (0xc0009b8630) Reply frame received for 5\nI0607 22:17:30.108914 3707 log.go:172] (0xc0009b8630) Data frame received for 5\nI0607 22:17:30.108965 3707 log.go:172] (0xc0006a5b80) (5) Data frame handling\nI0607 22:17:30.108993 3707 log.go:172] (0xc0006a5b80) (5) Data frame sent\nI0607 22:17:30.109013 3707 log.go:172] (0xc0009b8630) Data frame received for 5\nI0607 22:17:30.109050 3707 log.go:172] (0xc0006a5b80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30810\nConnection to 172.17.0.10 30810 port [tcp/30810] succeeded!\nI0607 22:17:30.109326 3707 log.go:172] (0xc0006a5b80) (5) Data frame sent\nI0607 22:17:30.109447 3707 log.go:172] (0xc0009b8630) Data frame received for 5\nI0607 22:17:30.109469 3707 log.go:172] (0xc0006a5b80) (5) Data frame handling\nI0607 22:17:30.109508 3707 log.go:172] (0xc0009b8630) Data frame received for 3\nI0607 22:17:30.109529 3707 log.go:172] (0xc00098e0a0) (3) Data frame handling\nI0607 22:17:30.111268 3707 log.go:172] (0xc0009b8630) Data frame received for 1\nI0607 22:17:30.111296 3707 log.go:172] (0xc00098e000) (1) Data frame handling\nI0607 22:17:30.111309 3707 log.go:172] (0xc00098e000) (1) Data frame sent\nI0607 22:17:30.111324 3707 log.go:172] (0xc0009b8630) (0xc00098e000) Stream removed, broadcasting: 1\nI0607 22:17:30.111382 3707 log.go:172] (0xc0009b8630) Go away received\nI0607 22:17:30.111595 3707 log.go:172] (0xc0009b8630) (0xc00098e000) Stream removed, broadcasting: 1\nI0607 22:17:30.111609 3707 log.go:172] (0xc0009b8630) (0xc00098e0a0) Stream removed, broadcasting: 3\nI0607 22:17:30.111622 3707 log.go:172] (0xc0009b8630) (0xc0006a5b80) Stream removed, broadcasting: 5\n" Jun 7 22:17:30.117: INFO: stdout: "" Jun 7 22:17:30.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2676 execpodz6sf6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30810' Jun 7 22:17:30.343: INFO: stderr: "I0607 22:17:30.244844 3728 log.go:172] (0xc000a460b0) 
(0xc00037f4a0) Create stream\nI0607 22:17:30.244906 3728 log.go:172] (0xc000a460b0) (0xc00037f4a0) Stream added, broadcasting: 1\nI0607 22:17:30.248217 3728 log.go:172] (0xc000a460b0) Reply frame received for 1\nI0607 22:17:30.248270 3728 log.go:172] (0xc000a460b0) (0xc000ad8000) Create stream\nI0607 22:17:30.248283 3728 log.go:172] (0xc000a460b0) (0xc000ad8000) Stream added, broadcasting: 3\nI0607 22:17:30.249625 3728 log.go:172] (0xc000a460b0) Reply frame received for 3\nI0607 22:17:30.249666 3728 log.go:172] (0xc000a460b0) (0xc0006eba40) Create stream\nI0607 22:17:30.249684 3728 log.go:172] (0xc000a460b0) (0xc0006eba40) Stream added, broadcasting: 5\nI0607 22:17:30.250622 3728 log.go:172] (0xc000a460b0) Reply frame received for 5\nI0607 22:17:30.334804 3728 log.go:172] (0xc000a460b0) Data frame received for 5\nI0607 22:17:30.334835 3728 log.go:172] (0xc0006eba40) (5) Data frame handling\nI0607 22:17:30.334846 3728 log.go:172] (0xc0006eba40) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30810\nConnection to 172.17.0.8 30810 port [tcp/30810] succeeded!\nI0607 22:17:30.334992 3728 log.go:172] (0xc000a460b0) Data frame received for 5\nI0607 22:17:30.335001 3728 log.go:172] (0xc0006eba40) (5) Data frame handling\nI0607 22:17:30.335044 3728 log.go:172] (0xc000a460b0) Data frame received for 3\nI0607 22:17:30.335056 3728 log.go:172] (0xc000ad8000) (3) Data frame handling\nI0607 22:17:30.337092 3728 log.go:172] (0xc000a460b0) Data frame received for 1\nI0607 22:17:30.337102 3728 log.go:172] (0xc00037f4a0) (1) Data frame handling\nI0607 22:17:30.337108 3728 log.go:172] (0xc00037f4a0) (1) Data frame sent\nI0607 22:17:30.337495 3728 log.go:172] (0xc000a460b0) (0xc00037f4a0) Stream removed, broadcasting: 1\nI0607 22:17:30.337581 3728 log.go:172] (0xc000a460b0) Go away received\nI0607 22:17:30.337750 3728 log.go:172] (0xc000a460b0) (0xc00037f4a0) Stream removed, broadcasting: 1\nI0607 22:17:30.337762 3728 log.go:172] (0xc000a460b0) (0xc000ad8000) Stream removed, 
broadcasting: 3\nI0607 22:17:30.337767 3728 log.go:172] (0xc000a460b0) (0xc0006eba40) Stream removed, broadcasting: 5\n" Jun 7 22:17:30.343: INFO: stdout: "" Jun 7 22:17:30.343: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:17:30.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2676" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.223 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":258,"skipped":4228,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:17:30.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name 
configmap-test-emptyKey-deefcd20-f44a-48b7-9912-f85bc40697bc [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:17:30.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3586" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":259,"skipped":4246,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:17:30.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:17:34.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1476" for this suite. 
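The ExternalName-to-NodePort Services test earlier probes reachability with `nc -zv -t -w 2 <host> <port>` against the service name, the ClusterIP, and each node's NodePort. The same zero-I/O TCP check can be sketched without `nc` using bash's `/dev/tcp` redirection (an assumption: this requires bash and coreutils `timeout`, and `check_tcp` is an illustrative name):

```shell
#!/bin/bash
# Hedged sketch of the connectivity probe: open and immediately close a TCP
# connection, like `nc -z`, enforcing a 2-second budget like `-w 2`.
check_tcp() {
  host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "Connection to ${host} ${port} succeeded"
  else
    echo "Connection to ${host} ${port} failed" >&2
    return 1
  fi
}
```

Unlike the in-cluster check, this runs wherever bash does, so it is only a local stand-in for the exec-pod probe shown in the log.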
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4268,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:17:34.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 22:17:34.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jun 7 22:17:37.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1943 create -f -'
Jun 7 22:17:41.547: INFO: stderr: ""
Jun 7 22:17:41.547: INFO: stdout: "e2e-test-crd-publish-openapi-8362-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jun 7 22:17:41.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1943 delete e2e-test-crd-publish-openapi-8362-crds test-foo'
Jun 7 22:17:41.664: INFO: stderr: ""
Jun 7 22:17:41.664: INFO: stdout: "e2e-test-crd-publish-openapi-8362-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jun 7 22:17:41.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1943 apply -f -'
Jun 7 22:17:41.909: INFO: stderr: ""
Jun 7 22:17:41.909: INFO: stdout: "e2e-test-crd-publish-openapi-8362-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jun 7 22:17:41.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1943 delete e2e-test-crd-publish-openapi-8362-crds test-foo'
Jun 7 22:17:42.012: INFO: stderr: ""
Jun 7 22:17:42.012: INFO: stdout: "e2e-test-crd-publish-openapi-8362-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jun 7 22:17:42.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1943 create -f -'
Jun 7 22:17:42.277: INFO: rc: 1
Jun 7 22:17:42.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1943 apply -f -'
Jun 7 22:17:42.541: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jun 7 22:17:42.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1943 create -f -'
Jun 7 22:17:42.768: INFO: rc: 1
Jun 7 22:17:42.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1943 apply -f -'
Jun 7 22:17:43.034: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jun 7 22:17:43.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8362-crds'
Jun 7 22:17:43.287: INFO: stderr: ""
Jun 7 22:17:43.287: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8362-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object.
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jun 7 22:17:43.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8362-crds.metadata'
Jun 7 22:17:43.518: INFO: stderr: ""
Jun 7 22:17:43.518: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8362-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to.
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. 
If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jun 7 22:17:43.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8362-crds.spec'
Jun 7 22:17:43.750: INFO: stderr: ""
Jun 7 22:17:43.750: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8362-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Jun 7 22:17:43.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8362-crds.spec.bars'
Jun 7 22:17:44.017: INFO: stderr: ""
Jun 7 22:17:44.017: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8362-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun 7 22:17:44.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8362-crds.spec.bars2'
Jun 7 22:17:44.291: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:17:47.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1943" for this suite.
• [SLOW TEST:12.596 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":261,"skipped":4274,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:17:47.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-d255a947-e81e-4d74-b213-099a4bcdd180
STEP: Creating a pod to test consume configMaps
Jun 7 22:17:47.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a" in namespace "projected-3088" to be "success or failure"
Jun 7 22:17:47.361: INFO: Pod "pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591612ms
Jun 7 22:17:49.365: INFO: Pod "pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00855754s
Jun 7 22:17:51.369: INFO: Pod "pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012870581s
Jun 7 22:17:53.374: INFO: Pod "pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017611866s
STEP: Saw pod success
Jun 7 22:17:53.374: INFO: Pod "pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a" satisfied condition "success or failure"
Jun 7 22:17:53.377: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a container projected-configmap-volume-test: 
STEP: delete the pod
Jun 7 22:17:53.410: INFO: Waiting for pod pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a to disappear
Jun 7 22:17:53.419: INFO: Pod pod-projected-configmaps-7a4deb2f-6585-4352-becd-3578e2ebbb1a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:17:53.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3088" for this suite.
• [SLOW TEST:6.262 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4299,"failed":0}
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:17:53.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-b98c2f87-3f94-4c0f-a69d-d680150130ca
STEP: Creating a pod to test consume configMaps
Jun 7 22:17:53.499: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01" in namespace "configmap-6276" to be "success or failure"
Jun 7 22:17:53.503: INFO: Pod "pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039103ms
Jun 7 22:17:55.518: INFO: Pod "pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018482933s
Jun 7 22:17:57.521: INFO: Pod "pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01": Phase="Running", Reason="", readiness=true. Elapsed: 4.022249214s
Jun 7 22:17:59.526: INFO: Pod "pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026656892s
STEP: Saw pod success
Jun 7 22:17:59.526: INFO: Pod "pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01" satisfied condition "success or failure"
Jun 7 22:17:59.529: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01 container configmap-volume-test: 
STEP: delete the pod
Jun 7 22:17:59.576: INFO: Waiting for pod pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01 to disappear
Jun 7 22:17:59.593: INFO: Pod pod-configmaps-ac5377cd-e77d-4558-8814-1bc9c1dced01 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:17:59.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6276" for this suite.
• [SLOW TEST:6.174 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4300,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:17:59.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 7 22:18:03.701: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:18:03.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2757" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4307,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:18:03.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-7ba3aacd-0ac2-40b6-bc6d-31bad2bc1618 in namespace container-probe-7869
Jun 7 22:18:07.923: INFO: Started pod liveness-7ba3aacd-0ac2-40b6-bc6d-31bad2bc1618 in namespace container-probe-7869
STEP: checking the pod's current state and verifying that restartCount is present
Jun 7 22:18:07.926: INFO: Initial restart count of pod liveness-7ba3aacd-0ac2-40b6-bc6d-31bad2bc1618 is 0
Jun 7 22:18:25.968: INFO: Restart count of pod container-probe-7869/liveness-7ba3aacd-0ac2-40b6-bc6d-31bad2bc1618 is now 1 (18.042696001s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:18:26.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7869" for this suite.
• [SLOW TEST:22.281 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4312,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:18:26.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jun 7 22:18:26.058: INFO: Creating deployment "webserver-deployment"
Jun 7 22:18:26.111: INFO: Waiting for observed generation 1
Jun 7 22:18:28.195: INFO: Waiting for all required pods to come up
Jun 7 22:18:28.199: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jun 7 22:18:38.208: INFO: Waiting for deployment "webserver-deployment" to complete
Jun 7 22:18:38.215: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jun 7 22:18:38.220: INFO: Updating deployment webserver-deployment
Jun 7 22:18:38.220: INFO: Waiting for observed generation 2
Jun 7 22:18:40.338: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jun 7 22:18:40.341: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jun 7 22:18:40.356: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jun 7 22:18:40.363: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jun 7 22:18:40.363: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jun 7 22:18:40.365: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jun 7 22:18:40.368: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jun 7 22:18:40.368: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jun 7 22:18:40.373: INFO: Updating deployment webserver-deployment
Jun 7 22:18:40.373: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jun 7 22:18:40.500: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jun 7 22:18:40.739: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jun 7 22:18:41.129: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7343
/apis/apps/v1/namespaces/deployment-7343/deployments/webserver-deployment 02e532f7-875e-4b02-90c4-7e6c09e25ca0 22546010 3 2020-06-07 22:18:26 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00475b488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-06-07 22:18:39 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-07 22:18:40 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 7 22:18:41.268: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" 
of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7343 /apis/apps/v1/namespaces/deployment-7343/replicasets/webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 22546070 3 2020-06-07 22:18:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 02e532f7-875e-4b02-90c4-7e6c09e25ca0 0xc00475bb47 0xc00475bb48}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00475bbb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 7 22:18:41.268: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 7 22:18:41.269: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7343 /apis/apps/v1/namespaces/deployment-7343/replicasets/webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 22546062 3 2020-06-07 22:18:26 +0000 UTC map[name:httpd 
pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 02e532f7-875e-4b02-90c4-7e6c09e25ca0 0xc00475ba17 0xc00475ba18}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00475bab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 7 22:18:41.407: INFO: Pod "webserver-deployment-595b5b9587-2x45c" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2x45c webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-2x45c 647c46b5-bc08-446c-a6f5-933428db1317 22545922 0 2020-06-07 22:18:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c0297 0xc0045c0298}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.127,StartTime:2020-06-07 22:18:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:18:35 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eb4357e98cf7eb6549a9a95aac5aef948479666f966885b8ccdbdbe2722f013b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.407: INFO: Pod "webserver-deployment-595b5b9587-7kbtw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7kbtw webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-7kbtw f91e3194-90bf-4223-bacd-6a36df681fc6 22546051 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c0587 0xc0045c0588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.407: INFO: Pod "webserver-deployment-595b5b9587-7z5cr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7z5cr webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-7z5cr 35c5799c-e4b1-4742-a071-30962da93310 22546052 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c0777 0xc0045c0778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.407: INFO: Pod "webserver-deployment-595b5b9587-8slpb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8slpb webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-8slpb 47a94f09-ae2d-4bdc-816e-f6ba2ff93cde 22545906 0 2020-06-07 22:18:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c0957 0xc0045c0958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.16,StartTime:2020-06-07 22:18:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:18:35 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0dafda216729d670cb2c9afcd06d49676178a38bbb7405e9336bc43c01f4fdac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.407: INFO: Pod "webserver-deployment-595b5b9587-b4cdj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b4cdj webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-b4cdj 81650e7e-aa58-47a1-a246-40621196d32c 22546074 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c0bd7 0xc0045c0bd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-07 22:18:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.408: INFO: Pod "webserver-deployment-595b5b9587-bq5hw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bq5hw webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-bq5hw 75f0a28d-b5b2-4fc5-b681-81788c6937d2 22546037 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c0e07 0xc0045c0e08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.408: INFO: Pod "webserver-deployment-595b5b9587-cr66x" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cr66x webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-cr66x 3c3a8ba2-45c5-4d26-9e5d-da7bc1782930 22545913 0 2020-06-07 22:18:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c0f47 0xc0045c0f48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.129,StartTime:2020-06-07 22:18:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:18:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://af8faecf58336495ecc4e6a4e2fc66f44cfe1871ad3d936495a36c7d028915b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.408: INFO: Pod "webserver-deployment-595b5b9587-fgkfn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fgkfn webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-fgkfn 626f2d0e-c206-4d47-b74e-f4801a2dd2cb 22545919 0 2020-06-07 22:18:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c1117 0xc0045c1118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.130,StartTime:2020-06-07 22:18:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:18:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ca475ebaf45d51eb2ce902a61b59e10e09013dab65bdfd42b05544aba602b372,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.130,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.408: INFO: Pod "webserver-deployment-595b5b9587-gc4zd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gc4zd webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-gc4zd a500486a-093a-48c5-b6cb-b3e6851b7f3b 22545882 0 2020-06-07 22:18:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c12c7 0xc0045c12c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.15,StartTime:2020-06-07 22:18:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:18:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://255473ae36dc2cc44ae8b9f55aa5bee2724c339d1d9ee6cbd65c45c4e5c7ad40,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.408: INFO: Pod "webserver-deployment-595b5b9587-gcjxx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gcjxx webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-gcjxx c160e633-2b42-48bf-9072-89a760c482ae 22546026 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c1447 0xc0045c1448}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.408: INFO: Pod "webserver-deployment-595b5b9587-gqld9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gqld9 webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-gqld9 04051a66-7399-4a0d-9cb9-0a86ecaadff3 22546038 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c1607 0xc0045c1608}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.408: INFO: Pod "webserver-deployment-595b5b9587-h7psk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h7psk webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-h7psk 6acf0413-be0c-432c-a49a-578659ff1de0 22546072 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c17a7 0xc0045c17a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-07 22:18:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.408: INFO: Pod "webserver-deployment-595b5b9587-lrmjs" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lrmjs webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-lrmjs 84f330ab-933c-4c45-8746-615b70ccf7d6 22545874 0 2020-06-07 22:18:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c1a17 0xc0045c1a18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.14,StartTime:2020-06-07 22:18:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:18:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3df73f747b7cc0cfe323c74f24c731af42858878c8a41ca6efb4c7fb4b0709ce,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.409: INFO: Pod "webserver-deployment-595b5b9587-mxkht" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mxkht webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-mxkht abcbdb00-7c33-41b0-8f2c-a2982b3ad8af 22546066 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c1b97 0xc0045c1b98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-07 22:18:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.409: INFO: Pod "webserver-deployment-595b5b9587-n4w7w" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n4w7w webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-n4w7w ebe7b566-4a8d-485a-a0c2-1f16e726df84 22545916 0 2020-06-07 22:18:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c1d67 0xc0045c1d68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.128,StartTime:2020-06-07 22:18:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:18:35 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e9782b15c6b826fdadda3bbcb44f238464291632f750aed536166d9eb75d91c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.128,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.409: INFO: Pod "webserver-deployment-595b5b9587-ptsr5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ptsr5 webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-ptsr5 dcbbc101-b237-4ab2-a3bf-d9dd2c971989 22546035 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc0045c1fa7 0xc0045c1fa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.409: INFO: Pod "webserver-deployment-595b5b9587-q4666" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q4666 webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-q4666 0ad16530-052e-442e-baf4-aa1a1af479c4 22546031 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc00459e157 0xc00459e158}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.409: INFO: Pod "webserver-deployment-595b5b9587-s6wm7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s6wm7 webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-s6wm7 3a60c631-9f88-4a20-8c5a-2c45cd022b28 22546034 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc00459e2a7 0xc00459e2a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.409: INFO: Pod "webserver-deployment-595b5b9587-xfmnk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xfmnk webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-xfmnk f3a32dd9-4454-45bd-b0bf-d34e34487907 22545876 0 2020-06-07 22:18:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc00459e3f7 0xc00459e3f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.126,StartTime:2020-06-07 22:18:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:18:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ca8dbe5c9ac191ff2a26a05ae15532e509f45bf614836eeeab673926114b5287,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.410: INFO: Pod "webserver-deployment-595b5b9587-zvwgk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zvwgk webserver-deployment-595b5b9587- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-595b5b9587-zvwgk b2d28e95-3e2e-44ac-ac04-050bc08e6a86 22546049 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 aa58c0d5-ab93-4436-9d15-eb1fd5414e39 0xc00459e5d7 0xc00459e5d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.410: INFO: Pod "webserver-deployment-c7997dcc8-4sqwg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4sqwg webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-4sqwg 0f90daca-fb33-481b-8a7d-cd832dee26e0 22546032 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459e767 0xc00459e768}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.410: INFO: Pod "webserver-deployment-c7997dcc8-cn89j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cn89j webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-cn89j 2f60182b-3892-4fb7-a3a4-ae1c093e88a9 22546048 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459e917 0xc00459e918}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.410: INFO: Pod "webserver-deployment-c7997dcc8-cxqq4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cxqq4 webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-cxqq4 cbeafea2-d0eb-47a1-8dd9-3b1c530362c9 22545959 0 2020-06-07 22:18:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459ea97 0xc00459ea98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-07 22:18:38 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.410: INFO: Pod "webserver-deployment-c7997dcc8-dx92l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dx92l webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-dx92l 4747d27f-8fed-45c3-a22d-be5d9cffb730 22545966 0 2020-06-07 22:18:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459ec87 0xc00459ec88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-07 22:18:38 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.410: INFO: Pod "webserver-deployment-c7997dcc8-g67pk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g67pk webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-g67pk aafaebe6-0893-4e43-9682-9526e6fa9de2 22546063 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459ee47 0xc00459ee48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.410: INFO: Pod "webserver-deployment-c7997dcc8-jbj2c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jbj2c webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-jbj2c 67a90cd7-aa4a-4473-a40c-70b46c5511b3 22546050 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459efb7 0xc00459efb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.410: INFO: Pod "webserver-deployment-c7997dcc8-jthcp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jthcp webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-jthcp 9eec2fc5-c10e-4783-ba66-1c304ad65688 22546046 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459f157 0xc00459f158}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.411: INFO: Pod "webserver-deployment-c7997dcc8-kd66n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kd66n webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-kd66n 848d89ee-b81d-427a-ad3b-a24941a8ea9d 22545983 0 2020-06-07 22:18:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459f2d7 0xc00459f2d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-07 22:18:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.411: INFO: Pod "webserver-deployment-c7997dcc8-kxfj2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kxfj2 webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-kxfj2 ce38ffe8-f591-4b82-9d39-44e3449ff747 22546020 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459f527 0xc00459f528}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.411: INFO: Pod "webserver-deployment-c7997dcc8-rnst5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rnst5 webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-rnst5 ccc2d26b-458e-426f-b417-add270096709 22546013 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459f727 0xc00459f728}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.411: INFO: Pod "webserver-deployment-c7997dcc8-sjg4f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sjg4f webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-sjg4f d21750a5-9dee-4b05-b4ac-e863896a51c4 22545957 0 2020-06-07 22:18:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459f8a7 0xc00459f8a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-06-07 22:18:38 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.411: INFO: Pod "webserver-deployment-c7997dcc8-t64lm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t64lm webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-t64lm aebc6542-5db1-4450-ad6a-285a76f65907 22546045 0 2020-06-07 22:18:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459fb87 0xc00459fb88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 7 22:18:41.411: INFO: Pod "webserver-deployment-c7997dcc8-zqx9x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zqx9x webserver-deployment-c7997dcc8- deployment-7343 /api/v1/namespaces/deployment-7343/pods/webserver-deployment-c7997dcc8-zqx9x 1084a248-d4e4-4e35-9ec8-fc209288a97c 22545982 0 2020-06-07 22:18:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 98ce7318-ef3e-4254-b0b3-2344afac1671 0xc00459fd87 0xc00459fd88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdrkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdrkj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdrkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:18:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-06-07 22:18:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:18:41.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7343" for this suite. • [SLOW TEST:15.557 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":266,"skipped":4317,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:18:41.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7990.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7990.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7990.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7990.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 22:19:06.306: INFO: DNS probes using dns-7990/dns-test-14bc4f61-f6f9-4155-9eeb-0e7ebe9e901a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:19:06.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7990" for this suite. • [SLOW TEST:25.072 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":267,"skipped":4321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:19:06.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default 
service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-bb790d90-d12a-4000-823f-d185553abb0c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-bb790d90-d12a-4000-823f-d185553abb0c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:20:35.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7815" for this suite. • [SLOW TEST:88.941 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:20:35.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 22:20:36.122: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 22:20:38.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:20:40.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 22:20:43.352: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jun 7 22:20:43.373: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:20:43.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8701" for this suite. STEP: Destroying namespace "webhook-8701-markers" for this suite. 
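Side note, not part of the captured log: the "should deny crd creation" steps above register a validating webhook and then expect a CustomResourceDefinition create to be rejected. Mechanically, a denying webhook answers the API server with an AdmissionReview whose `response.allowed` is false and whose `response.uid` echoes the request uid. The sketch below follows the `admission.k8s.io/v1` schema; the `deny_crd` function name, the sample uid, and the denial message are illustrative assumptions, not taken from the e2e suite.

```python
# Minimal sketch of the response a denying admission webhook returns,
# assuming the admission.k8s.io/v1 AdmissionReview schema. Names and the
# denial message here are hypothetical, not copied from the e2e webhook.

def deny_crd(review: dict) -> dict:
    """Build a denial response for an incoming AdmissionReview request."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            # The uid must echo request.uid so the API server can
            # correlate this response with its pending request.
            "uid": review["request"]["uid"],
            "allowed": False,
            "status": {"message": "CRD creation denied by webhook (example message)"},
        },
    }

# Hypothetical incoming review for a CustomResourceDefinition CREATE.
incoming = {
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "request": {
        "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",
        "kind": {
            "group": "apiextensions.k8s.io",
            "version": "v1",
            "kind": "CustomResourceDefinition",
        },
        "operation": "CREATE",
    },
}

resp = deny_crd(incoming)
```

With `allowed` false, the API server fails the CRD create with the webhook's status message, which is the behavior the test asserts before tearing down the `webhook-8701` namespaces.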
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.883 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":269,"skipped":4397,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:20:43.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jun 7 22:20:43.515: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:20:49.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6962" for this suite. • [SLOW TEST:6.488 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":270,"skipped":4416,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:20:49.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis 
discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:20:50.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6467" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":271,"skipped":4419,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:20:50.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jun 7 22:20:50.388: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 7 22:20:55.393: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 7 
22:20:55.393: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 7 22:20:57.396: INFO: Creating deployment "test-rollover-deployment" Jun 7 22:20:57.401: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 7 22:20:59.406: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 7 22:20:59.411: INFO: Ensure that both replica sets have 1 created replica Jun 7 22:20:59.416: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 7 22:20:59.423: INFO: Updating deployment test-rollover-deployment Jun 7 22:20:59.423: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 7 22:21:01.457: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 7 22:21:01.464: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 7 22:21:01.470: INFO: all replica sets need to contain the pod-template-hash label Jun 7 22:21:01.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165259, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 
22:21:03.478: INFO: all replica sets need to contain the pod-template-hash label Jun 7 22:21:03.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165262, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:21:05.477: INFO: all replica sets need to contain the pod-template-hash label Jun 7 22:21:05.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165262, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:21:07.478: INFO: all replica sets need to contain the pod-template-hash label Jun 7 22:21:07.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165262, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:21:09.476: INFO: all replica sets need to contain the pod-template-hash label Jun 7 22:21:09.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165262, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:21:11.478: INFO: all replica sets need to contain the pod-template-hash label Jun 7 22:21:11.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165262, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165257, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 7 22:21:13.481: INFO: Jun 7 22:21:13.481: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jun 7 22:21:13.488: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6959 /apis/apps/v1/namespaces/deployment-6959/deployments/test-rollover-deployment 1421a493-1e60-4996-b027-728ef14b2d83 22547009 2 2020-06-07 22:20:57 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a44b78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-07 22:20:57 +0000 UTC,LastTransitionTime:2020-06-07 22:20:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-06-07 22:21:13 +0000 UTC,LastTransitionTime:2020-06-07 22:20:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 7 22:21:13.491: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6959 /apis/apps/v1/namespaces/deployment-6959/replicasets/test-rollover-deployment-574d6dfbff 54f24460-1dda-4dc0-9211-4547c510d62a 22546997 2 2020-06-07 22:20:59 +0000 UTC map[name:rollover-pod 
pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 1421a493-1e60-4996-b027-728ef14b2d83 0xc003bf0367 0xc003bf0368}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003bf03d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 7 22:21:13.491: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 7 22:21:13.491: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6959 /apis/apps/v1/namespaces/deployment-6959/replicasets/test-rollover-controller 13926cd3-f581-453e-872a-6a1d2ce9ca00 22547007 2 2020-06-07 22:20:50 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 1421a493-1e60-4996-b027-728ef14b2d83 0xc003bf027f 0xc003bf0290}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003bf02f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 7 22:21:13.491: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6959 /apis/apps/v1/namespaces/deployment-6959/replicasets/test-rollover-deployment-f6c94f66c f666acb1-896a-4b65-9a56-d8a47c293957 22546942 2 2020-06-07 22:20:57 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 1421a493-1e60-4996-b027-728ef14b2d83 0xc003bf0440 0xc003bf0441}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003bf04b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 7 22:21:13.494: INFO: Pod "test-rollover-deployment-574d6dfbff-zqnk8" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-zqnk8 test-rollover-deployment-574d6dfbff- deployment-6959 /api/v1/namespaces/deployment-6959/pods/test-rollover-deployment-574d6dfbff-zqnk8 44fa2301-5509-452d-b4e6-17c3868a7d92 22546966 0 2020-06-07 22:20:59 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 54f24460-1dda-4dc0-9211-4547c510d62a 0xc003bf09e7 0xc003bf09e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k74cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k74cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k74cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Host
name:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:20:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:21:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:21:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-07 22:20:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.35,StartTime:2020-06-07 22:20:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-07 22:21:02 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://79c020d8b7e279d307ee44a2a1e5adcd41bada74259e9d37162bd79c1dcde180,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:21:13.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6959" for this suite. • [SLOW TEST:23.434 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":272,"skipped":4430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:21:13.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 7 22:21:14.365: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 7 22:21:16.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165274, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165274, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165274, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727165274, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 7 22:21:19.418: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter 
than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:21:31.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5585" for this suite. STEP: Destroying namespace "webhook-5585-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.231 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":273,"skipped":4460,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:21:31.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:21:38.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6120" for this suite. • [SLOW TEST:7.082 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":274,"skipped":4461,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:21:38.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jun 7 22:21:44.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6667" for this suite. 
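The ResourceQuota specs that passed above ("Counting existing ResourceQuota", "Ensuring resource quota status is calculated") exercise the controller's usage accounting: a request is admitted only while hard limits minus calculated usage leave room. The sketch below is an illustrative, simplified reconstruction of that remaining-quota check, not the actual kube-apiserver admission plugin; the `ResourceList` type and integer quantities are stand-ins for the real `resource.Quantity`-based types.

```go
package main

import "fmt"

// ResourceList maps resource names (e.g. "pods", "cpu") to integer quantities.
// Simplified stand-in for the Kubernetes ResourceList type; the real quota
// controller works on resource.Quantity values.
type ResourceList map[string]int64

// fitsQuota reports whether a request can be admitted given a quota's hard
// limits and its currently calculated usage. Illustrative only.
func fitsQuota(hard, used, request ResourceList) bool {
	for name, limit := range hard {
		if used[name]+request[name] > limit {
			return false
		}
	}
	return true
}

func main() {
	hard := ResourceList{"pods": 2, "cpu": 1000}
	used := ResourceList{"pods": 1, "cpu": 500}
	fmt.Println(fitsQuota(hard, used, ResourceList{"pods": 1, "cpu": 400})) // true
	fmt.Println(fitsQuota(hard, used, ResourceList{"pods": 1, "cpu": 600})) // false
}
```

The "released the pod usage" step in the earlier ResourceQuota spec corresponds to subtracting a deleted pod's requests back out of `used`.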
• [SLOW TEST:6.199 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":275,"skipped":4477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jun 7 22:21:45.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8299.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8299.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8299.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8299.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8299.svc.cluster.local 
SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8299.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8299.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8299.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8299.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8299.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 57.83.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.83.57_udp@PTR;check="$$(dig +tcp +noall +answer +search 57.83.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.83.57_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8299.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8299.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8299.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8299.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8299.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8299.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8299.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8299.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8299.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8299.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8299.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 57.83.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.83.57_udp@PTR;check="$$(dig +tcp +noall +answer +search 57.83.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.83.57_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 7 22:21:51.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:51.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:51.723: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:51.727: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:51.746: INFO: Unable to read jessie_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:51.749: INFO: Unable to read jessie_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:51.751: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod 
dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:51.754: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:51.770: INFO: Lookups using dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0 failed for: [wheezy_udp@dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_udp@dns-test-service.dns-8299.svc.cluster.local jessie_tcp@dns-test-service.dns-8299.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local] Jun 7 22:21:56.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:56.779: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:56.783: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:56.787: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod 
dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:56.808: INFO: Unable to read jessie_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:56.811: INFO: Unable to read jessie_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:56.814: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:56.816: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:21:56.836: INFO: Lookups using dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0 failed for: [wheezy_udp@dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_udp@dns-test-service.dns-8299.svc.cluster.local jessie_tcp@dns-test-service.dns-8299.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local] Jun 7 22:22:01.776: INFO: Unable to read wheezy_udp@dns-test-service.dns-8299.svc.cluster.local from pod 
dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:01.779: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:01.782: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:01.786: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:01.803: INFO: Unable to read jessie_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:01.805: INFO: Unable to read jessie_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:01.808: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:01.812: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the 
requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:01.828: INFO: Lookups using dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0 failed for: [wheezy_udp@dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_udp@dns-test-service.dns-8299.svc.cluster.local jessie_tcp@dns-test-service.dns-8299.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local] Jun 7 22:22:06.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:06.779: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:06.782: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:06.785: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:06.807: INFO: Unable to read jessie_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods 
dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:06.810: INFO: Unable to read jessie_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:06.814: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:06.817: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:06.838: INFO: Lookups using dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0 failed for: [wheezy_udp@dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_udp@dns-test-service.dns-8299.svc.cluster.local jessie_tcp@dns-test-service.dns-8299.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local] Jun 7 22:22:11.776: INFO: Unable to read wheezy_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) Jun 7 22:22:11.780: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0) 
Jun 7 22:22:11.784: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:11.787: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:11.807: INFO: Unable to read jessie_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:11.810: INFO: Unable to read jessie_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:11.813: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:11.816: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:11.835: INFO: Lookups using dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0 failed for: [wheezy_udp@dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_udp@dns-test-service.dns-8299.svc.cluster.local jessie_tcp@dns-test-service.dns-8299.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local]
Jun 7 22:22:16.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:16.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:16.780: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:16.783: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:16.817: INFO: Unable to read jessie_udp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:16.820: INFO: Unable to read jessie_tcp@dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:16.823: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:16.825: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local from pod dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0: the server could not find the requested resource (get pods dns-test-3613b51c-ecde-472a-82da-3a8c576866c0)
Jun 7 22:22:16.842: INFO: Lookups using dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0 failed for: [wheezy_udp@dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@dns-test-service.dns-8299.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_udp@dns-test-service.dns-8299.svc.cluster.local jessie_tcp@dns-test-service.dns-8299.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8299.svc.cluster.local]
Jun 7 22:22:21.836: INFO: DNS probes using dns-8299/dns-test-3613b51c-ecde-472a-82da-3a8c576866c0 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:22:22.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8299" for this suite.
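For context on the probe names above: the test resolves both the Service's A/AAAA name (`<svc>.<ns>.svc.cluster.local`) and the SRV name published for its named `http` port (`_http._tcp.<svc>.<ns>.svc.cluster.local`). A minimal Go sketch of how those query names are formed — the helper names are my own, and the default `cluster.local` zone is assumed:

```go
package main

import "fmt"

// serviceFQDN builds the A/AAAA record name cluster DNS publishes for a
// Service, assuming the default cluster.local zone. (Illustrative helper,
// not part of the e2e framework.)
func serviceFQDN(svc, ns string) string {
	return fmt.Sprintf("%s.%s.svc.cluster.local", svc, ns)
}

// srvFQDN builds the SRV record name for a named port on that Service.
func srvFQDN(portName, proto, svc, ns string) string {
	return fmt.Sprintf("_%s._%s.%s", portName, proto, serviceFQDN(svc, ns))
}

func main() {
	// The two name shapes probed by the log entries above.
	fmt.Println(serviceFQDN("dns-test-service", "dns-8299"))
	fmt.Println(srvFQDN("http", "tcp", "dns-test-service", "dns-8299"))
}
```

The probe pod queries each name over both UDP and TCP from two images ("wheezy" and "jessie"), which is why every name appears multiple times in the failure list.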
• [SLOW TEST:37.931 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":276,"skipped":4509,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:22:22.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-626ae56a-99b0-4129-b8b7-494e46455190
STEP: Creating a pod to test consume configMaps
Jun 7 22:22:23.031: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-98eba0e5-de47-47e3-8c67-39b77bb76c60" in namespace "projected-8403" to be "success or failure"
Jun 7 22:22:23.050: INFO: Pod "pod-projected-configmaps-98eba0e5-de47-47e3-8c67-39b77bb76c60": Phase="Pending", Reason="", readiness=false. Elapsed: 19.601052ms
Jun 7 22:22:25.133: INFO: Pod "pod-projected-configmaps-98eba0e5-de47-47e3-8c67-39b77bb76c60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101713343s
Jun 7 22:22:27.140: INFO: Pod "pod-projected-configmaps-98eba0e5-de47-47e3-8c67-39b77bb76c60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109059817s
STEP: Saw pod success
Jun 7 22:22:27.140: INFO: Pod "pod-projected-configmaps-98eba0e5-de47-47e3-8c67-39b77bb76c60" satisfied condition "success or failure"
Jun 7 22:22:27.143: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-98eba0e5-de47-47e3-8c67-39b77bb76c60 container projected-configmap-volume-test:
STEP: delete the pod
Jun 7 22:22:27.278: INFO: Waiting for pod pod-projected-configmaps-98eba0e5-de47-47e3-8c67-39b77bb76c60 to disappear
Jun 7 22:22:27.305: INFO: Pod pod-projected-configmaps-98eba0e5-de47-47e3-8c67-39b77bb76c60 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:22:27.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8403" for this suite.
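For context, the "consumable in multiple volumes" check above mounts the same ConfigMap into one pod through more than one projected volume. A hedged sketch of what such a manifest can look like — the pod name, ConfigMap name, keys, and mount paths here are illustrative and not taken from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume-1/data-1"]
    volumeMounts:
    - name: projected-configmap-volume-1
      mountPath: /etc/projected-configmap-volume-1
    - name: projected-configmap-volume-2
      mountPath: /etc/projected-configmap-volume-2
  volumes:
  - name: projected-configmap-volume-1
    projected:
      sources:
      - configMap:
          name: my-configmap               # illustrative ConfigMap name
  - name: projected-configmap-volume-2
    projected:
      sources:
      - configMap:
          name: my-configmap
```

The "success or failure" condition in the log corresponds to the pod running its command to completion (`Phase="Succeeded"`) rather than staying up, which is why `restartPolicy: Never` fits this pattern.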
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4551,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jun 7 22:22:27.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jun 7 22:22:32.493: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jun 7 22:22:32.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1535" for this suite.
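The adopt/release steps above hinge on label-selector matching: a ReplicaSet adopts an orphan pod whose labels satisfy its selector, and releases a pod once its labels stop matching. A minimal Go sketch of that matching check for equality-based selectors — simplified, since the real controller also rewrites `ownerReferences` and handles set-based selectors:

```go
package main

import "fmt"

// selectorMatches reports whether a pod's labels satisfy an equality-based
// selector: every selector key must be present with the same value.
// (Illustrative simplification of the controller's matching logic.)
func selectorMatches(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// Selector mirroring the 'name' label used by the test above.
	selector := map[string]string{"name": "pod-adoption-release"}

	// Matching labels -> candidate for adoption.
	fmt.Println(selectorMatches(selector, map[string]string{"name": "pod-adoption-release"})) // true

	// Changed label -> no longer matches, so the pod is released.
	fmt.Println(selectorMatches(selector, map[string]string{"name": "not-matching"})) // false
}
```

This is also why the log reports "Found 1 pods out of 1" right before the release step: after the label change, only the ReplicaSet's own replacement pod still matches the selector.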
• [SLOW TEST:5.226 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":278,"skipped":4560,"failed":0}
SSSS
Jun 7 22:22:32.604: INFO: Running AfterSuite actions on all nodes
Jun 7 22:22:32.604: INFO: Running AfterSuite actions on node 1
Jun 7 22:22:32.604: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4430.367 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS