I0701 12:28:28.573614 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0701 12:28:28.573813 6 e2e.go:109] Starting e2e run "b516189c-e2f4-41a0-94e6-e7a4b8058bb4" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593606507 - Will randomize all specs
Will run 278 of 4842 specs

Jul 1 12:28:28.627: INFO: >>> kubeConfig: /root/.kube/config
Jul 1 12:28:28.631: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 1 12:28:28.652: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 1 12:28:28.680: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 1 12:28:28.680: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 1 12:28:28.680: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 1 12:28:28.690: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 1 12:28:28.690: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 1 12:28:28.690: INFO: e2e test version: v1.17.4
Jul 1 12:28:28.691: INFO: kube-apiserver version: v1.17.2
Jul 1 12:28:28.691: INFO: >>> kubeConfig: /root/.kube/config
Jul 1 12:28:28.697: INFO: Cluster IP family: ipv4
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:28:28.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Jul 1 12:28:29.212: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-899d656d-7a8b-468a-b1d6-2864b0149208
STEP: Creating a pod to test consume secrets
Jul 1 12:28:29.409: INFO: Waiting up to 5m0s for pod "pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd" in namespace "secrets-8999" to be "success or failure"
Jul 1 12:28:29.494: INFO: Pod "pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 84.795358ms
Jul 1 12:28:31.498: INFO: Pod "pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08858913s
Jul 1 12:28:33.501: INFO: Pod "pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092204943s
Jul 1 12:28:35.506: INFO: Pod "pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096653757s
STEP: Saw pod success
Jul 1 12:28:35.506: INFO: Pod "pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd" satisfied condition "success or failure"
Jul 1 12:28:35.510: INFO: Trying to get logs from node jerma-worker pod pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd container secret-volume-test:
STEP: delete the pod
Jul 1 12:28:35.590: INFO: Waiting for pod pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd to disappear
Jul 1 12:28:35.594: INFO: Pod pod-secrets-15c9e3ff-a112-4308-9082-b3dce38ed4dd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:28:35.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8999" for this suite.
• [SLOW TEST:6.906 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":3,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:28:35.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-9d90f875-9a3c-4ff2-b71c-e26ddcac0787
STEP: Creating secret with name s-test-opt-upd-815a2678-f17d-4f04-bc09-210301a59bc7
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9d90f875-9a3c-4ff2-b71c-e26ddcac0787
STEP: Updating secret s-test-opt-upd-815a2678-f17d-4f04-bc09-210301a59bc7
STEP: Creating secret with name s-test-opt-create-1ba4f617-5c2f-42f2-8be4-06feb290d2bf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:28:45.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1886" for this suite.
• [SLOW TEST:10.262 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":10,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:28:45.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-dd639227-5782-4dbd-988d-36f010ec1e06
STEP: Creating secret with name s-test-opt-upd-b3472628-206f-432f-8877-4b990ad53541
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-dd639227-5782-4dbd-988d-36f010ec1e06
STEP: Updating secret s-test-opt-upd-b3472628-206f-432f-8877-4b990ad53541
STEP: Creating secret with name s-test-opt-create-65038e9a-fc4b-463d-8778-559c0a8c6892
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:30:19.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3389" for this suite.
• [SLOW TEST:93.465 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:30:19.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jul 1 12:30:19.912: INFO: >>> kubeConfig: /root/.kube/config
Jul 1 12:30:23.082: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:30:33.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3472" for this suite.
• [SLOW TEST:14.606 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":4,"skipped":54,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:30:33.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6812
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6812
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6812
Jul 1 12:30:34.020: INFO: Found 0 stateful pods, waiting for 1
Jul 1 12:30:44.025: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 1 12:30:44.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6812 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 1 12:30:47.412: INFO: stderr: "I0701 12:30:47.255787 33 log.go:172] (0xc0000f4f20) (0xc0001366e0) Create stream\nI0701 12:30:47.255863 33 log.go:172] (0xc0000f4f20) (0xc0001366e0) Stream added, broadcasting: 1\nI0701 12:30:47.258699 33 log.go:172] (0xc0000f4f20) Reply frame received for 1\nI0701 12:30:47.258734 33 log.go:172] (0xc0000f4f20) (0xc000136a00) Create stream\nI0701 12:30:47.258744 33 log.go:172] (0xc0000f4f20) (0xc000136a00) Stream added, broadcasting: 3\nI0701 12:30:47.259762 33 log.go:172] (0xc0000f4f20) Reply frame received for 3\nI0701 12:30:47.259789 33 log.go:172] (0xc0000f4f20) (0xc000669ae0) Create stream\nI0701 12:30:47.259799 33 log.go:172] (0xc0000f4f20) (0xc000669ae0) Stream added, broadcasting: 5\nI0701 12:30:47.260840 33 log.go:172] (0xc0000f4f20) Reply frame received for 5\nI0701 12:30:47.371351 33 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0701 12:30:47.371374 33 log.go:172] (0xc000669ae0) (5) Data frame handling\nI0701 12:30:47.371387 33 log.go:172] (0xc000669ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:30:47.405344 33 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0701 12:30:47.405388 33 log.go:172] (0xc000136a00) (3) Data frame handling\nI0701 12:30:47.405414 33 log.go:172] (0xc000136a00) (3) Data frame sent\nI0701 12:30:47.405493 33 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0701 12:30:47.405501 33 log.go:172] (0xc000136a00) (3) Data frame handling\nI0701 12:30:47.405518 33 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0701 12:30:47.405565 33 log.go:172] (0xc000669ae0) (5) Data frame handling\nI0701 12:30:47.406896 33 log.go:172] (0xc0000f4f20) Data frame received for 1\nI0701 12:30:47.406916 33 log.go:172] (0xc0001366e0) (1) Data frame handling\nI0701 12:30:47.406931 33 log.go:172] (0xc0001366e0) (1) Data frame sent\nI0701 12:30:47.406944 33 log.go:172] (0xc0000f4f20) (0xc0001366e0) Stream removed, broadcasting: 1\nI0701 12:30:47.406962 33 log.go:172] (0xc0000f4f20) Go away received\nI0701 12:30:47.407350 33 log.go:172] (0xc0000f4f20) (0xc0001366e0) Stream removed, broadcasting: 1\nI0701 12:30:47.407367 33 log.go:172] (0xc0000f4f20) (0xc000136a00) Stream removed, broadcasting: 3\nI0701 12:30:47.407377 33 log.go:172] (0xc0000f4f20) (0xc000669ae0) Stream removed, broadcasting: 5\n"
Jul 1 12:30:47.412: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 1 12:30:47.412: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jul 1 12:30:47.415: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 1 12:30:57.625: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 1 12:30:57.625: INFO: Waiting for statefulset status.replicas updated to 0
Jul 1 12:30:57.716: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999519s
Jul 1 12:30:58.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982762852s
Jul 1 12:30:59.746: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970420835s
Jul 1 12:31:00.752: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.952579004s
Jul 1 12:31:01.850: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.9460926s
Jul 1 12:31:02.855: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.848210893s
Jul 1 12:31:03.858: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.843323676s
Jul 1 12:31:04.862: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.840070704s
Jul 1 12:31:05.891: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.836411325s
Jul 1 12:31:06.895: INFO: Verifying statefulset ss doesn't scale past 1 for another 807.723886ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6812
Jul 1 12:31:07.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6812 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 1 12:31:08.313: INFO: stderr: "I0701 12:31:08.167021 63 log.go:172] (0xc000104b00) (0xc0007c2140) Create stream\nI0701 12:31:08.167085 63 log.go:172] (0xc000104b00) (0xc0007c2140) Stream added, broadcasting: 1\nI0701 12:31:08.169994 63 log.go:172] (0xc000104b00) Reply frame received for 1\nI0701 12:31:08.170038 63 log.go:172] (0xc000104b00) (0xc0005bb540) Create stream\nI0701 12:31:08.170048 63 log.go:172] (0xc000104b00) (0xc0005bb540) Stream added, broadcasting: 3\nI0701 12:31:08.170954 63 log.go:172] (0xc000104b00) Reply frame received for 3\nI0701 12:31:08.170999 63 log.go:172] (0xc000104b00) (0xc0007c21e0) Create stream\nI0701 12:31:08.171012 63 log.go:172] (0xc000104b00) (0xc0007c21e0) Stream added, broadcasting: 5\nI0701 12:31:08.171847 63 log.go:172] (0xc000104b00) Reply frame received for 5\nI0701 12:31:08.231973 63 log.go:172] (0xc000104b00) Data frame received for 5\nI0701 12:31:08.231995 63 log.go:172] (0xc0007c21e0) (5) Data frame handling\nI0701 12:31:08.232008 63 log.go:172] (0xc0007c21e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 12:31:08.304579 63 log.go:172] (0xc000104b00) Data frame received for 5\nI0701 12:31:08.304642 63 log.go:172] (0xc0007c21e0) (5) Data frame handling\nI0701 12:31:08.304678 63 log.go:172] (0xc000104b00) Data frame received for 3\nI0701 12:31:08.304713 63 log.go:172] (0xc0005bb540) (3) Data frame handling\nI0701 12:31:08.304738 63 log.go:172] (0xc0005bb540) (3) Data frame sent\nI0701 12:31:08.304752 63 log.go:172] (0xc000104b00) Data frame received for 3\nI0701 12:31:08.304766 63 log.go:172] (0xc0005bb540) (3) Data frame handling\nI0701 12:31:08.306697 63 log.go:172] (0xc000104b00) Data frame received for 1\nI0701 12:31:08.306728 63 log.go:172] (0xc0007c2140) (1) Data frame handling\nI0701 12:31:08.306745 63 log.go:172] (0xc0007c2140) (1) Data frame sent\nI0701 12:31:08.306956 63 log.go:172] (0xc000104b00) (0xc0007c2140) Stream removed, broadcasting: 1\nI0701 12:31:08.307025 63 log.go:172] (0xc000104b00) Go away received\nI0701 12:31:08.307242 63 log.go:172] (0xc000104b00) (0xc0007c2140) Stream removed, broadcasting: 1\nI0701 12:31:08.307254 63 log.go:172] (0xc000104b00) (0xc0005bb540) Stream removed, broadcasting: 3\nI0701 12:31:08.307260 63 log.go:172] (0xc000104b00) (0xc0007c21e0) Stream removed, broadcasting: 5\n"
Jul 1 12:31:08.313: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 1 12:31:08.313: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jul 1 12:31:08.355: INFO: Found 1 stateful pods, waiting for 3
Jul 1 12:31:18.482: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:31:18.482: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:31:18.482: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 1 12:31:28.360: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:31:28.360: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:31:28.360: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul 1 12:31:28.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6812 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 1 12:31:28.580: INFO: stderr: "I0701 12:31:28.502584 84 log.go:172] (0xc0003c2dc0) (0xc0006ef9a0) Create stream\nI0701 12:31:28.502631 84 log.go:172] (0xc0003c2dc0) (0xc0006ef9a0) Stream added, broadcasting: 1\nI0701 12:31:28.504680 84 log.go:172] (0xc0003c2dc0) Reply frame received for 1\nI0701 12:31:28.504717 84 log.go:172] (0xc0003c2dc0) (0xc00093e000) Create stream\nI0701 12:31:28.504726 84 log.go:172] (0xc0003c2dc0) (0xc00093e000) Stream added, broadcasting: 3\nI0701 12:31:28.505496 84 log.go:172] (0xc0003c2dc0) Reply frame received for 3\nI0701 12:31:28.505522 84 log.go:172] (0xc0003c2dc0) (0xc0006efb80) Create stream\nI0701 12:31:28.505528 84 log.go:172] (0xc0003c2dc0) (0xc0006efb80) Stream added, broadcasting: 5\nI0701 12:31:28.506100 84 log.go:172] (0xc0003c2dc0) Reply frame received for 5\nI0701 12:31:28.572267 84 log.go:172] (0xc0003c2dc0) Data frame received for 5\nI0701 12:31:28.572324 84 log.go:172] (0xc0006efb80) (5) Data frame handling\nI0701 12:31:28.572348 84 log.go:172] (0xc0006efb80) (5) Data frame sent\nI0701 12:31:28.572364 84 log.go:172] (0xc0003c2dc0) Data frame received for 5\nI0701 12:31:28.572379 84 log.go:172] (0xc0006efb80) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:31:28.572422 84 log.go:172] (0xc0003c2dc0) Data frame received for 3\nI0701 12:31:28.572439 84 log.go:172] (0xc00093e000) (3) Data frame handling\nI0701 12:31:28.572460 84 log.go:172] (0xc00093e000) (3) Data frame sent\nI0701 12:31:28.572500 84 log.go:172] (0xc0003c2dc0) Data frame received for 3\nI0701 12:31:28.572525 84 log.go:172] (0xc00093e000) (3) Data frame handling\nI0701 12:31:28.574094 84 log.go:172] (0xc0003c2dc0) Data frame received for 1\nI0701 12:31:28.574120 84 log.go:172] (0xc0006ef9a0) (1) Data frame handling\nI0701 12:31:28.574140 84 log.go:172] (0xc0006ef9a0) (1) Data frame sent\nI0701 12:31:28.574154 84 log.go:172] (0xc0003c2dc0) (0xc0006ef9a0) Stream removed, broadcasting: 1\nI0701 12:31:28.574168 84 log.go:172] (0xc0003c2dc0) Go away received\nI0701 12:31:28.574529 84 log.go:172] (0xc0003c2dc0) (0xc0006ef9a0) Stream removed, broadcasting: 1\nI0701 12:31:28.574549 84 log.go:172] (0xc0003c2dc0) (0xc00093e000) Stream removed, broadcasting: 3\nI0701 12:31:28.574558 84 log.go:172] (0xc0003c2dc0) (0xc0006efb80) Stream removed, broadcasting: 5\n"
Jul 1 12:31:28.580: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 1 12:31:28.580: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jul 1 12:31:28.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6812 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 1 12:31:28.819: INFO: stderr: "I0701 12:31:28.698530 106 log.go:172] (0xc0009d2e70) (0xc000697d60) Create stream\nI0701 12:31:28.698603 106 log.go:172] (0xc0009d2e70) (0xc000697d60) Stream added, broadcasting: 1\nI0701 12:31:28.702225 106 log.go:172] (0xc0009d2e70) Reply frame received for 1\nI0701 12:31:28.702288 106 log.go:172] (0xc0009d2e70) (0xc000982000) Create stream\nI0701 12:31:28.702315 106 log.go:172] (0xc0009d2e70) (0xc000982000) Stream added, broadcasting: 3\nI0701 12:31:28.703352 106 log.go:172] (0xc0009d2e70) Reply frame received for 3\nI0701 12:31:28.703404 106 log.go:172] (0xc0009d2e70) (0xc000697e00) Create stream\nI0701 12:31:28.703420 106 log.go:172] (0xc0009d2e70) (0xc000697e00) Stream added, broadcasting: 5\nI0701 12:31:28.704483 106 log.go:172] (0xc0009d2e70) Reply frame received for 5\nI0701 12:31:28.765642 106 log.go:172] (0xc0009d2e70) Data frame received for 5\nI0701 12:31:28.765663 106 log.go:172] (0xc000697e00) (5) Data frame handling\nI0701 12:31:28.765734 106 log.go:172] (0xc000697e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:31:28.807257 106 log.go:172] (0xc0009d2e70) Data frame received for 3\nI0701 12:31:28.807299 106 log.go:172] (0xc000982000) (3) Data frame handling\nI0701 12:31:28.807438 106 log.go:172] (0xc000982000) (3) Data frame sent\nI0701 12:31:28.807763 106 log.go:172] (0xc0009d2e70) Data frame received for 3\nI0701 12:31:28.807783 106 log.go:172] (0xc000982000) (3) Data frame handling\nI0701 12:31:28.808139 106 log.go:172] (0xc0009d2e70) Data frame received for 5\nI0701 12:31:28.808167 106 log.go:172] (0xc000697e00) (5) Data frame handling\nI0701 12:31:28.810198 106 log.go:172] (0xc0009d2e70) Data frame received for 1\nI0701 12:31:28.810209 106 log.go:172] (0xc000697d60) (1) Data frame handling\nI0701 12:31:28.810216 106 log.go:172] (0xc000697d60) (1) Data frame sent\nI0701 12:31:28.810225 106 log.go:172] (0xc0009d2e70) (0xc000697d60) Stream removed, broadcasting: 1\nI0701 12:31:28.810489 106 log.go:172] (0xc0009d2e70) (0xc000697d60) Stream removed, broadcasting: 1\nI0701 12:31:28.810511 106 log.go:172] (0xc0009d2e70) (0xc000982000) Stream removed, broadcasting: 3\nI0701 12:31:28.810570 106 log.go:172] (0xc0009d2e70) Go away received\nI0701 12:31:28.810638 106 log.go:172] (0xc0009d2e70) (0xc000697e00) Stream removed, broadcasting: 5\n"
Jul 1 12:31:28.819: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 1 12:31:28.819: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jul 1 12:31:28.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6812 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 1 12:31:29.284: INFO: stderr: "I0701 12:31:29.112321 125 log.go:172] (0xc0007700b0) (0xc00073c140) Create stream\nI0701 12:31:29.112385 125 log.go:172] (0xc0007700b0) (0xc00073c140) Stream added, broadcasting: 1\nI0701 12:31:29.115460 125 log.go:172] (0xc0007700b0) Reply frame received for 1\nI0701 12:31:29.115528 125 log.go:172] (0xc0007700b0) (0xc000306000) Create stream\nI0701 12:31:29.115553 125 log.go:172] (0xc0007700b0) (0xc000306000) Stream added, broadcasting: 3\nI0701 12:31:29.116410 125 log.go:172] (0xc0007700b0) Reply frame received for 3\nI0701 12:31:29.116444 125 log.go:172] (0xc0007700b0) (0xc00073c1e0) Create stream\nI0701 12:31:29.116454 125 log.go:172] (0xc0007700b0) (0xc00073c1e0) Stream added, broadcasting: 5\nI0701 12:31:29.117453 125 log.go:172] (0xc0007700b0) Reply frame received for 5\nI0701 12:31:29.179976 125 log.go:172] (0xc0007700b0) Data frame received for 5\nI0701 12:31:29.180012 125 log.go:172] (0xc00073c1e0) (5) Data frame handling\nI0701 12:31:29.180037 125 log.go:172] (0xc00073c1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:31:29.275915 125 log.go:172] (0xc0007700b0) Data frame received for 5\nI0701 12:31:29.275939 125 log.go:172] (0xc00073c1e0) (5) Data frame handling\nI0701 12:31:29.276002 125 log.go:172] (0xc0007700b0) Data frame received for 3\nI0701 12:31:29.276046 125 log.go:172] (0xc000306000) (3) Data frame handling\nI0701 12:31:29.276075 125 log.go:172] (0xc000306000) (3) Data frame sent\nI0701 12:31:29.276093 125 log.go:172] (0xc0007700b0) Data frame received for 3\nI0701 12:31:29.276108 125 log.go:172] (0xc000306000) (3) Data frame handling\nI0701 12:31:29.277683 125 log.go:172] (0xc0007700b0) Data frame received for 1\nI0701 12:31:29.277704 125 log.go:172] (0xc00073c140) (1) Data frame handling\nI0701 12:31:29.277715 125 log.go:172] (0xc00073c140) (1) Data frame sent\nI0701 12:31:29.277726 125 log.go:172] (0xc0007700b0) (0xc00073c140) Stream removed, broadcasting: 1\nI0701 12:31:29.277759 125 log.go:172] (0xc0007700b0) Go away received\nI0701 12:31:29.278029 125 log.go:172] (0xc0007700b0) (0xc00073c140) Stream removed, broadcasting: 1\nI0701 12:31:29.278045 125 log.go:172] (0xc0007700b0) (0xc000306000) Stream removed, broadcasting: 3\nI0701 12:31:29.278052 125 log.go:172] (0xc0007700b0) (0xc00073c1e0) Stream removed, broadcasting: 5\n"
Jul 1 12:31:29.284: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 1 12:31:29.284: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jul 1 12:31:29.284: INFO: Waiting for statefulset status.replicas updated to 0
Jul 1 12:31:29.287: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jul 1 12:31:39.294: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 1 12:31:39.294: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 1 12:31:39.294: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 1 12:31:39.321: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999695s
Jul 1 12:31:40.326: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976659448s
Jul 1 12:31:41.332: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972181414s
Jul 1 12:31:42.386: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.96639504s
Jul 1 12:31:43.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.911695659s
Jul 1 12:31:44.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.710234681s
Jul 1 12:31:45.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.70418788s
Jul 1 12:31:46.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.497914992s
Jul 1 12:31:47.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.493844026s
Jul 1 12:31:48.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 489.683424ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6812
Jul 1 12:31:49.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6812 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 1 12:31:50.075: INFO: stderr: "I0701 12:31:50.009277 146 log.go:172] (0xc000912000) (0xc000829ae0) Create stream\nI0701 12:31:50.009335 146 log.go:172] (0xc000912000) (0xc000829ae0) Stream added, broadcasting: 1\nI0701 12:31:50.011143 146 log.go:172] (0xc000912000) Reply frame received for 1\nI0701 12:31:50.011178 146 log.go:172] (0xc000912000) (0xc0005534a0) Create stream\nI0701 12:31:50.011192 146 log.go:172] (0xc000912000) (0xc0005534a0) Stream added, broadcasting: 3\nI0701 12:31:50.012046 146 log.go:172] (0xc000912000) Reply frame received for 3\nI0701 12:31:50.012073 146 log.go:172] (0xc000912000) (0xc000829cc0) Create stream\nI0701 12:31:50.012082 146 log.go:172] (0xc000912000) (0xc000829cc0) Stream added, broadcasting: 5\nI0701 12:31:50.012940 146 log.go:172] (0xc000912000) Reply frame received for 5\nI0701 12:31:50.070246 146 log.go:172] (0xc000912000) Data frame received for 5\nI0701 12:31:50.070268 146 log.go:172] (0xc000829cc0) (5) Data frame handling\nI0701 12:31:50.070276 146 log.go:172] (0xc000829cc0) (5) Data frame sent\nI0701 12:31:50.070281 146 log.go:172] (0xc000912000) Data frame received for 5\nI0701 12:31:50.070284 146 log.go:172] (0xc000829cc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 12:31:50.070300 146 log.go:172] (0xc000912000) Data frame received for 3\nI0701 12:31:50.070306 146 log.go:172] (0xc0005534a0) (3) Data frame handling\nI0701 12:31:50.070311 146 log.go:172] (0xc0005534a0) (3) Data frame sent\nI0701 12:31:50.070318 146 log.go:172] (0xc000912000) Data frame received for 3\nI0701 12:31:50.070321 146 log.go:172] (0xc0005534a0) (3) Data frame handling\nI0701 12:31:50.071136 146 log.go:172] (0xc000912000) Data frame received for 1\nI0701 12:31:50.071153 146 log.go:172] (0xc000829ae0) (1) Data frame handling\nI0701 12:31:50.071162 146 log.go:172] (0xc000829ae0) (1) Data frame sent\nI0701 12:31:50.071175 146 log.go:172] (0xc000912000) (0xc000829ae0) Stream removed, broadcasting: 1\nI0701 12:31:50.071193 146 log.go:172] (0xc000912000) Go away received\nI0701 12:31:50.071413 146 log.go:172] (0xc000912000) (0xc000829ae0) Stream removed, broadcasting: 1\nI0701 12:31:50.071428 146 log.go:172] (0xc000912000) (0xc0005534a0) Stream removed, broadcasting: 3\nI0701 12:31:50.071435 146 log.go:172] (0xc000912000) (0xc000829cc0) Stream removed, broadcasting: 5\n"
Jul 1 12:31:50.075: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 1 12:31:50.075: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jul 1 12:31:50.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6812 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 1 12:31:50.264: INFO: stderr: "I0701 12:31:50.186625 167 log.go:172] (0xc0006e8840) (0xc0006e4000) Create stream\nI0701 12:31:50.186688 167 log.go:172] (0xc0006e8840) (0xc0006e4000) Stream added, broadcasting: 1\nI0701 12:31:50.189601 167 log.go:172] (0xc0006e8840) Reply frame received for 1\nI0701 12:31:50.189632 167 log.go:172] (0xc0006e8840) (0xc00069fae0) Create stream\nI0701 12:31:50.189644 167 log.go:172] (0xc0006e8840) (0xc00069fae0) Stream added, broadcasting: 3\nI0701 12:31:50.190549 167 log.go:172] (0xc0006e8840) Reply frame received for 3\nI0701 12:31:50.190591 167 log.go:172] (0xc0006e8840) (0xc0006e40a0) Create stream\nI0701 12:31:50.190603 167 log.go:172] (0xc0006e8840) (0xc0006e40a0) Stream added, broadcasting: 5\nI0701 12:31:50.191345 167 log.go:172] (0xc0006e8840) Reply frame received for 5\nI0701 12:31:50.256699 167 log.go:172] (0xc0006e8840) Data frame received for 5\nI0701 12:31:50.256739 167 log.go:172] (0xc0006e40a0) (5) Data frame handling\nI0701 12:31:50.256751 167 log.go:172] (0xc0006e40a0) (5) Data frame sent\nI0701 12:31:50.256785 167 log.go:172] (0xc0006e8840) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 12:31:50.256809 167 log.go:172] (0xc0006e8840) Data frame received for 3\nI0701 12:31:50.256841 167 log.go:172] (0xc00069fae0) (3) Data frame handling\nI0701 12:31:50.256870 167 log.go:172] (0xc00069fae0) (3) Data frame sent\nI0701 12:31:50.256889 167 log.go:172] (0xc0006e8840) Data frame received for 3\nI0701 12:31:50.256908 167 log.go:172] (0xc00069fae0) (3) Data frame handling\nI0701 12:31:50.256933 167 log.go:172] (0xc0006e40a0) (5) Data frame handling\nI0701 12:31:50.258499 167 log.go:172] (0xc0006e8840) Data frame received for 1\nI0701 12:31:50.258521 167 log.go:172] (0xc0006e4000) (1) Data frame handling\nI0701 12:31:50.258534 167 log.go:172] (0xc0006e4000) (1) Data frame sent\nI0701 12:31:50.258549 167 log.go:172] (0xc0006e8840) (0xc0006e4000) Stream removed, broadcasting: 1\nI0701 12:31:50.258808 167 log.go:172] (0xc0006e8840) Go away received\nI0701 12:31:50.258949 167 log.go:172] (0xc0006e8840) (0xc0006e4000) Stream removed, broadcasting: 1\nI0701 12:31:50.258972 167 log.go:172] (0xc0006e8840) (0xc00069fae0) Stream removed, broadcasting: 3\nI0701 12:31:50.258986 167 log.go:172] (0xc0006e8840) (0xc0006e40a0) Stream removed, broadcasting: 5\n"
Jul 1 12:31:50.264: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 1 12:31:50.264: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jul 1 12:31:50.264: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6812 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 12:31:50.463: INFO: stderr: "I0701 12:31:50.389839 188 log.go:172] (0xc0006266e0) (0xc0005bbe00) Create stream\nI0701 12:31:50.389896 188 log.go:172] (0xc0006266e0) (0xc0005bbe00) Stream added, broadcasting: 1\nI0701 12:31:50.392482 188 log.go:172] (0xc0006266e0) Reply frame received for 1\nI0701 12:31:50.392589 188 log.go:172] (0xc0006266e0) (0xc00099c000) Create stream\nI0701 12:31:50.392605 188 log.go:172] (0xc0006266e0) (0xc00099c000) Stream added, broadcasting: 3\nI0701 12:31:50.393506 188 log.go:172] (0xc0006266e0) Reply frame received for 3\nI0701 12:31:50.393545 188 log.go:172] (0xc0006266e0) (0xc000125540) Create stream\nI0701 12:31:50.393565 188 log.go:172] (0xc0006266e0) (0xc000125540) Stream added, broadcasting: 5\nI0701 12:31:50.394280 188 log.go:172] (0xc0006266e0) Reply frame received for 5\nI0701 12:31:50.455883 188 log.go:172] (0xc0006266e0) Data frame received for 3\nI0701 12:31:50.455920 188 log.go:172] (0xc00099c000) (3) Data frame handling\nI0701 12:31:50.455949 188 log.go:172] (0xc00099c000) (3) Data frame sent\nI0701 12:31:50.455971 188 log.go:172] (0xc0006266e0) Data frame received for 3\nI0701 12:31:50.455979 188 log.go:172] (0xc00099c000) (3) Data frame handling\nI0701 12:31:50.456562 188 log.go:172] (0xc0006266e0) Data frame received for 5\nI0701 12:31:50.456576 188 log.go:172] (0xc000125540) (5) Data frame handling\nI0701 12:31:50.456593 188 log.go:172] (0xc000125540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 12:31:50.456607 188 log.go:172] (0xc0006266e0) Data frame received for 5\nI0701 12:31:50.456638 188 log.go:172] (0xc000125540) (5) Data frame handling\nI0701 12:31:50.457616 188 log.go:172] (0xc0006266e0) Data frame received for 1\nI0701 12:31:50.457626 188 log.go:172] (0xc0005bbe00) (1) Data frame handling\nI0701 12:31:50.457641 188 log.go:172] 
(0xc0005bbe00) (1) Data frame sent\nI0701 12:31:50.457850 188 log.go:172] (0xc0006266e0) (0xc0005bbe00) Stream removed, broadcasting: 1\nI0701 12:31:50.457874 188 log.go:172] (0xc0006266e0) Go away received\nI0701 12:31:50.458294 188 log.go:172] (0xc0006266e0) (0xc0005bbe00) Stream removed, broadcasting: 1\nI0701 12:31:50.458318 188 log.go:172] (0xc0006266e0) (0xc00099c000) Stream removed, broadcasting: 3\nI0701 12:31:50.458331 188 log.go:172] (0xc0006266e0) (0xc000125540) Stream removed, broadcasting: 5\n" Jul 1 12:31:50.463: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 12:31:50.463: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 12:31:50.463: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 1 12:32:10.476: INFO: Deleting all statefulset in ns statefulset-6812 Jul 1 12:32:10.479: INFO: Scaling statefulset ss to 0 Jul 1 12:32:10.488: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 12:32:10.490: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:32:10.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6812" for this suite. 
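For context on the `mv` commands above: the test's StatefulSet pods serve `/usr/local/apache2/htdocs/index.html` behind an HTTP readiness probe, so moving the file to `/tmp` flips the pod to Ready=false (halting ordered scaling), and moving it back restores readiness. A minimal sketch of such a StatefulSet; the names and image are illustrative, not the exact manifest the e2e framework generates:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test          # headless service name, illustrative
  replicas: 3
  # OrderedReady (the default) creates pods one at a time in order
  # and deletes them in reverse order, pausing on any unready pod.
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4
        readinessProbe:
          # Fails once index.html is moved out of htdocs, so the
          # controller refuses to scale past this pod.
          httpGet:
            path: /index.html
            port: 80
```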
• [SLOW TEST:96.606 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":5,"skipped":67,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:32:10.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:32:27.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2174" for this suite. • [SLOW TEST:17.246 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":6,"skipped":73,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:32:27.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-v2h4 STEP: Creating a pod to test atomic-volume-subpath Jul 1 12:32:28.269: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-v2h4" in namespace "subpath-7882" to be "success or failure" Jul 1 12:32:28.453: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Pending", Reason="", readiness=false. Elapsed: 183.998652ms Jul 1 12:32:30.457: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18842372s Jul 1 12:32:32.503: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234127907s Jul 1 12:32:34.573: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303840156s Jul 1 12:32:36.578: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.308528787s Jul 1 12:32:38.581: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 10.312434268s Jul 1 12:32:40.586: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 12.317097526s Jul 1 12:32:42.591: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 14.321955967s Jul 1 12:32:44.595: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 16.326183625s Jul 1 12:32:46.641: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 18.371485384s Jul 1 12:32:48.645: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 20.376361553s Jul 1 12:32:50.650: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 22.380787099s Jul 1 12:32:52.654: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 24.385080918s Jul 1 12:32:54.658: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Running", Reason="", readiness=true. Elapsed: 26.388697632s Jul 1 12:32:56.661: INFO: Pod "pod-subpath-test-projected-v2h4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.392132626s STEP: Saw pod success Jul 1 12:32:56.661: INFO: Pod "pod-subpath-test-projected-v2h4" satisfied condition "success or failure" Jul 1 12:32:56.663: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-v2h4 container test-container-subpath-projected-v2h4: STEP: delete the pod Jul 1 12:32:56.708: INFO: Waiting for pod pod-subpath-test-projected-v2h4 to disappear Jul 1 12:32:56.919: INFO: Pod pod-subpath-test-projected-v2h4 no longer exists STEP: Deleting pod pod-subpath-test-projected-v2h4 Jul 1 12:32:56.919: INFO: Deleting pod "pod-subpath-test-projected-v2h4" in namespace "subpath-7882" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:32:56.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7882" for this suite. • [SLOW TEST:29.136 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":7,"skipped":87,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:32:56.926: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-wn89 STEP: Creating a pod to test atomic-volume-subpath Jul 1 12:32:57.081: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wn89" in namespace "subpath-6192" to be "success or failure" Jul 1 12:32:57.108: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Pending", Reason="", readiness=false. Elapsed: 26.412038ms Jul 1 12:32:59.166: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084892282s Jul 1 12:33:01.170: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 4.089240747s Jul 1 12:33:03.174: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 6.092736829s Jul 1 12:33:05.220: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 8.138723668s Jul 1 12:33:07.238: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 10.156744108s Jul 1 12:33:09.261: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 12.179989862s Jul 1 12:33:11.334: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 14.252328544s Jul 1 12:33:13.338: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.256321233s Jul 1 12:33:15.341: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 18.260214673s Jul 1 12:33:17.348: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 20.267125012s Jul 1 12:33:19.352: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Running", Reason="", readiness=true. Elapsed: 22.270831989s Jul 1 12:33:21.356: INFO: Pod "pod-subpath-test-configmap-wn89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.274514528s STEP: Saw pod success Jul 1 12:33:21.356: INFO: Pod "pod-subpath-test-configmap-wn89" satisfied condition "success or failure" Jul 1 12:33:21.360: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-wn89 container test-container-subpath-configmap-wn89: STEP: delete the pod Jul 1 12:33:21.437: INFO: Waiting for pod pod-subpath-test-configmap-wn89 to disappear Jul 1 12:33:21.473: INFO: Pod pod-subpath-test-configmap-wn89 no longer exists STEP: Deleting pod pod-subpath-test-configmap-wn89 Jul 1 12:33:21.474: INFO: Deleting pod "pod-subpath-test-configmap-wn89" in namespace "subpath-6192" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:33:21.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6192" for this suite. 
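Both subpath tests above follow the same pattern: an atomic-writer volume (projected or ConfigMap-backed) is mounted into the container via `subPath`, and the container repeatedly reads the file while the kubelet updates the volume. A hedged sketch of the ConfigMap variant, with illustrative names and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox:1.29
    command: ["sh", "-c", "cat /test-volume/data && sleep 20"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/data
      # subPath mounts a single file from the volume rather than
      # the whole atomically-updated directory.
      subPath: data
  volumes:
  - name: config
    configMap:
      name: my-configmap              # illustrative
```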
• [SLOW TEST:24.577 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":8,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:33:21.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:33:28.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-239" for this suite. 
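The adoption check above works because a ReplicationController takes ownership of any running pod that matches its selector and has no existing controller owner reference. A sketch of the two objects involved, with illustrative names:

```yaml
# An orphan pod carrying the label the controller selects on...
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
---
# ...is adopted (gains an ownerReference) once this RC is created,
# so the RC counts it toward replicas instead of creating a new pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
```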
• [SLOW TEST:7.333 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":9,"skipped":112,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:33:28.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-650fdf19-eb39-4085-913d-49489b6ece6d STEP: Creating a pod to test consume secrets Jul 1 12:33:28.943: INFO: Waiting up to 5m0s for pod "pod-secrets-497132b7-9a8b-415f-9abc-39db98129643" in namespace "secrets-6148" to be "success or failure" Jul 1 12:33:28.961: INFO: Pod "pod-secrets-497132b7-9a8b-415f-9abc-39db98129643": Phase="Pending", Reason="", readiness=false. Elapsed: 17.684233ms Jul 1 12:33:31.058: INFO: Pod "pod-secrets-497132b7-9a8b-415f-9abc-39db98129643": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.114998035s Jul 1 12:33:33.063: INFO: Pod "pod-secrets-497132b7-9a8b-415f-9abc-39db98129643": Phase="Running", Reason="", readiness=true. Elapsed: 4.11941324s Jul 1 12:33:35.067: INFO: Pod "pod-secrets-497132b7-9a8b-415f-9abc-39db98129643": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123916384s STEP: Saw pod success Jul 1 12:33:35.067: INFO: Pod "pod-secrets-497132b7-9a8b-415f-9abc-39db98129643" satisfied condition "success or failure" Jul 1 12:33:35.070: INFO: Trying to get logs from node jerma-worker pod pod-secrets-497132b7-9a8b-415f-9abc-39db98129643 container secret-volume-test: STEP: delete the pod Jul 1 12:33:35.111: INFO: Waiting for pod pod-secrets-497132b7-9a8b-415f-9abc-39db98129643 to disappear Jul 1 12:33:35.123: INFO: Pod pod-secrets-497132b7-9a8b-415f-9abc-39db98129643 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:33:35.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6148" for this suite. 
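The defaultMode test mounts a Secret volume and has the container print the permission bits of a projected file so the framework can assert on them. A minimal sketch of such a pod; key names, paths, and mode are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # Prints the file mode; the test asserts the output matches defaultMode.
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test         # illustrative
      defaultMode: 0400               # applied to every projected file
```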
• [SLOW TEST:6.295 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":116,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:33:35.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 1 12:33:35.229: INFO: Waiting up to 5m0s for pod "pod-624b49c3-d567-4aaa-b4ac-9e10e15aa652" in namespace "emptydir-6381" to be "success or failure" Jul 1 12:33:35.233: INFO: Pod "pod-624b49c3-d567-4aaa-b4ac-9e10e15aa652": Phase="Pending", Reason="", readiness=false. Elapsed: 3.780439ms Jul 1 12:33:37.237: INFO: Pod "pod-624b49c3-d567-4aaa-b4ac-9e10e15aa652": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008119224s Jul 1 12:33:39.240: INFO: Pod "pod-624b49c3-d567-4aaa-b4ac-9e10e15aa652": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011076193s STEP: Saw pod success Jul 1 12:33:39.240: INFO: Pod "pod-624b49c3-d567-4aaa-b4ac-9e10e15aa652" satisfied condition "success or failure" Jul 1 12:33:39.243: INFO: Trying to get logs from node jerma-worker2 pod pod-624b49c3-d567-4aaa-b4ac-9e10e15aa652 container test-container: STEP: delete the pod Jul 1 12:33:39.285: INFO: Waiting for pod pod-624b49c3-d567-4aaa-b4ac-9e10e15aa652 to disappear Jul 1 12:33:39.345: INFO: Pod pod-624b49c3-d567-4aaa-b4ac-9e10e15aa652 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:33:39.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6381" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:33:39.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-2af314db-3cb6-4f04-b311-4d5ad1ba3b7a STEP: Creating a pod to test consume secrets Jul 1 12:33:39.468: INFO: Waiting up to 5m0s for pod "pod-secrets-0a49d647-9af4-4a14-801b-a4a837167243" 
in namespace "secrets-1576" to be "success or failure" Jul 1 12:33:39.495: INFO: Pod "pod-secrets-0a49d647-9af4-4a14-801b-a4a837167243": Phase="Pending", Reason="", readiness=false. Elapsed: 27.345394ms Jul 1 12:33:41.507: INFO: Pod "pod-secrets-0a49d647-9af4-4a14-801b-a4a837167243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039151778s Jul 1 12:33:43.519: INFO: Pod "pod-secrets-0a49d647-9af4-4a14-801b-a4a837167243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051550715s STEP: Saw pod success Jul 1 12:33:43.519: INFO: Pod "pod-secrets-0a49d647-9af4-4a14-801b-a4a837167243" satisfied condition "success or failure" Jul 1 12:33:43.522: INFO: Trying to get logs from node jerma-worker pod pod-secrets-0a49d647-9af4-4a14-801b-a4a837167243 container secret-volume-test: STEP: delete the pod Jul 1 12:33:43.540: INFO: Waiting for pod pod-secrets-0a49d647-9af4-4a14-801b-a4a837167243 to disappear Jul 1 12:33:43.550: INFO: Pod pod-secrets-0a49d647-9af4-4a14-801b-a4a837167243 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:33:43.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1576" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":165,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:33:43.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:33:44.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7838" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":13,"skipped":173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:33:44.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jul 1 12:33:44.191: INFO: >>> kubeConfig: /root/.kube/config Jul 1 12:33:47.159: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:33:56.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3034" for this suite. 
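The CustomResourcePublishOpenAPI test passes only when each CRD carries a structural schema, which the apiserver then publishes at `/openapi/v2`; two kinds in the same group and version each need their own CRD object. A hedged sketch of one such CRD, with an illustrative group and kind:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com              # illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      # A structural schema is required for the kind to appear in the
      # published OpenAPI documentation the test inspects.
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
```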
• [SLOW TEST:12.836 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":14,"skipped":233,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:33:56.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:34:13.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8215" for this suite.
• [SLOW TEST:16.534 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":15,"skipped":236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:34:13.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6672
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jul 1 12:34:13.651: INFO: Found 0 stateful pods, waiting for 3
Jul 1 12:34:23.694: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:34:23.694: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:34:23.694: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 1 12:34:33.657: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:34:33.657: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:34:33.657: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 1 12:34:33.686: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 1 12:34:43.756: INFO: Updating stateful set ss2
Jul 1 12:34:43.793: INFO: Waiting for Pod statefulset-6672/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jul 1 12:34:54.875: INFO: Found 2 stateful pods, waiting for 3
Jul 1 12:35:05.270: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:35:05.270: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:35:05.270: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 1 12:35:14.880: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:35:14.880: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 1 12:35:14.880: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 1 12:35:14.902: INFO: Updating stateful set ss2
Jul 1 12:35:14.919: INFO: Waiting for Pod statefulset-6672/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 1 12:35:24.945: INFO: Updating stateful set ss2
Jul 1 12:35:25.250: INFO: Waiting for StatefulSet statefulset-6672/ss2 to complete update
Jul 1 12:35:25.250: INFO: Waiting for Pod statefulset-6672/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 1 12:35:35.257: INFO: Waiting for StatefulSet statefulset-6672/ss2 to complete update
Jul 1 12:35:35.257: INFO: Waiting for Pod statefulset-6672/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 1 12:35:45.258: INFO: Deleting all statefulset in ns statefulset-6672
Jul 1 12:35:45.261: INFO: Scaling statefulset ss2 to 0
Jul 1 12:36:15.274: INFO: Waiting for statefulset status.replicas updated to 0
Jul 1 12:36:15.276: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:36:15.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6672" for this suite.
• [SLOW TEST:121.883 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":16,"skipped":268,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:36:15.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 1 12:36:16.058: INFO: Pod name wrapped-volume-race-0441c216-17b1-4dde-800d-fa1124aa3262: Found 0 pods out of 5
Jul 1 12:36:21.111: INFO: Pod name wrapped-volume-race-0441c216-17b1-4dde-800d-fa1124aa3262: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0441c216-17b1-4dde-800d-fa1124aa3262 in namespace emptydir-wrapper-2916, will wait for the garbage collector to delete the pods
Jul 1 12:36:35.242: INFO: Deleting ReplicationController wrapped-volume-race-0441c216-17b1-4dde-800d-fa1124aa3262 took: 6.984385ms
Jul 1 12:36:35.343: INFO: Terminating ReplicationController wrapped-volume-race-0441c216-17b1-4dde-800d-fa1124aa3262 pods took: 100.278389ms
STEP: Creating RC which spawns configmap-volume pods
Jul 1 12:36:50.403: INFO: Pod name wrapped-volume-race-81aa82e9-9f4c-4f15-88ab-e67418525ac5: Found 0 pods out of 5
Jul 1 12:36:55.497: INFO: Pod name wrapped-volume-race-81aa82e9-9f4c-4f15-88ab-e67418525ac5: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-81aa82e9-9f4c-4f15-88ab-e67418525ac5 in namespace emptydir-wrapper-2916, will wait for the garbage collector to delete the pods
Jul 1 12:37:14.343: INFO: Deleting ReplicationController wrapped-volume-race-81aa82e9-9f4c-4f15-88ab-e67418525ac5 took: 6.546387ms
Jul 1 12:37:14.743: INFO: Terminating ReplicationController wrapped-volume-race-81aa82e9-9f4c-4f15-88ab-e67418525ac5 pods took: 400.281897ms
STEP: Creating RC which spawns configmap-volume pods
Jul 1 12:37:29.472: INFO: Pod name wrapped-volume-race-b9aa0153-5b9b-4890-9bf8-45acdb86166f: Found 0 pods out of 5
Jul 1 12:37:34.478: INFO: Pod name wrapped-volume-race-b9aa0153-5b9b-4890-9bf8-45acdb86166f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b9aa0153-5b9b-4890-9bf8-45acdb86166f in namespace emptydir-wrapper-2916, will wait for the garbage collector to delete the pods
Jul 1 12:37:50.562: INFO: Deleting ReplicationController wrapped-volume-race-b9aa0153-5b9b-4890-9bf8-45acdb86166f took: 7.078359ms
Jul 1 12:37:50.963: INFO: Terminating ReplicationController wrapped-volume-race-b9aa0153-5b9b-4890-9bf8-45acdb86166f pods took: 400.238244ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:38:00.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2916" for this suite.
• [SLOW TEST:105.645 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":17,"skipped":271,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:38:00.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-a1fa1b85-9cc0-4b67-947d-8a9a7878d2a9
STEP: Creating a pod to test consume secrets
Jul 1 12:38:01.623: INFO: Waiting up to 5m0s for pod "pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f" in namespace "secrets-5350" to be "success or failure"
Jul 1 12:38:01.769: INFO: Pod "pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f": Phase="Pending", Reason="", readiness=false. Elapsed: 146.644094ms
Jul 1 12:38:03.865: INFO: Pod "pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242308008s
Jul 1 12:38:05.907: INFO: Pod "pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f": Phase="Running", Reason="", readiness=true. Elapsed: 4.28410849s
Jul 1 12:38:07.979: INFO: Pod "pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.356579768s
STEP: Saw pod success
Jul 1 12:38:07.979: INFO: Pod "pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f" satisfied condition "success or failure"
Jul 1 12:38:07.985: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f container secret-env-test:
STEP: delete the pod
Jul 1 12:38:08.219: INFO: Waiting for pod pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f to disappear
Jul 1 12:38:08.223: INFO: Pod pod-secrets-b48e49d6-db9d-4856-8c81-47d72301db3f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:38:08.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5350" for this suite.
• [SLOW TEST:7.306 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":281,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:38:08.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:38:19.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4830" for this suite.
• [SLOW TEST:11.194 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":19,"skipped":289,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:38:19.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-f1e09dcc-911b-4bcf-a8c0-4b4266a76a76
STEP: Creating secret with name secret-projected-all-test-volume-f8ffb616-fdbb-4558-80f3-40bb19b51217
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 1 12:38:19.958: INFO: Waiting up to 5m0s for pod "projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8" in namespace "projected-6268" to be "success or failure"
Jul 1 12:38:20.171: INFO: Pod "projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 213.064729ms
Jul 1 12:38:22.175: INFO: Pod "projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216485658s
Jul 1 12:38:24.179: INFO: Pod "projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8": Phase="Running", Reason="", readiness=true. Elapsed: 4.220802107s
Jul 1 12:38:26.184: INFO: Pod "projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.225584779s
STEP: Saw pod success
Jul 1 12:38:26.184: INFO: Pod "projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8" satisfied condition "success or failure"
Jul 1 12:38:26.187: INFO: Trying to get logs from node jerma-worker pod projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8 container projected-all-volume-test:
STEP: delete the pod
Jul 1 12:38:26.214: INFO: Waiting for pod projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8 to disappear
Jul 1 12:38:26.284: INFO: Pod projected-volume-ad5315b9-7928-4893-bef0-a9a8976e4fe8 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:38:26.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6268" for this suite.
• [SLOW TEST:6.849 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":20,"skipped":311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:38:26.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-4f36cf3b-d5a1-4ba5-8f18-a85ab2e14dbb
STEP: Creating a pod to test consume configMaps
Jul 1 12:38:26.375: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e6509985-a5b8-4ad1-90a0-68f530d99d55" in namespace "projected-9621" to be "success or failure"
Jul 1 12:38:26.381: INFO: Pod "pod-projected-configmaps-e6509985-a5b8-4ad1-90a0-68f530d99d55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162547ms
Jul 1 12:38:28.386: INFO: Pod "pod-projected-configmaps-e6509985-a5b8-4ad1-90a0-68f530d99d55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010573948s
Jul 1 12:38:30.390: INFO: Pod "pod-projected-configmaps-e6509985-a5b8-4ad1-90a0-68f530d99d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015342683s
STEP: Saw pod success
Jul 1 12:38:30.390: INFO: Pod "pod-projected-configmaps-e6509985-a5b8-4ad1-90a0-68f530d99d55" satisfied condition "success or failure"
Jul 1 12:38:30.393: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e6509985-a5b8-4ad1-90a0-68f530d99d55 container projected-configmap-volume-test:
STEP: delete the pod
Jul 1 12:38:30.591: INFO: Waiting for pod pod-projected-configmaps-e6509985-a5b8-4ad1-90a0-68f530d99d55 to disappear
Jul 1 12:38:30.638: INFO: Pod pod-projected-configmaps-e6509985-a5b8-4ad1-90a0-68f530d99d55 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:38:30.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9621" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":335,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:38:30.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 1 12:38:30.721: INFO: Waiting up to 5m0s for pod "pod-9de841a2-b0a8-45db-ab62-43282a7c2bc7" in namespace "emptydir-3636" to be "success or failure"
Jul 1 12:38:30.724: INFO: Pod "pod-9de841a2-b0a8-45db-ab62-43282a7c2bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473693ms
Jul 1 12:38:32.824: INFO: Pod "pod-9de841a2-b0a8-45db-ab62-43282a7c2bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102432326s
Jul 1 12:38:34.901: INFO: Pod "pod-9de841a2-b0a8-45db-ab62-43282a7c2bc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179873741s
STEP: Saw pod success
Jul 1 12:38:34.901: INFO: Pod "pod-9de841a2-b0a8-45db-ab62-43282a7c2bc7" satisfied condition "success or failure"
Jul 1 12:38:34.903: INFO: Trying to get logs from node jerma-worker pod pod-9de841a2-b0a8-45db-ab62-43282a7c2bc7 container test-container:
STEP: delete the pod
Jul 1 12:38:34.941: INFO: Waiting for pod pod-9de841a2-b0a8-45db-ab62-43282a7c2bc7 to disappear
Jul 1 12:38:34.948: INFO: Pod pod-9de841a2-b0a8-45db-ab62-43282a7c2bc7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:38:34.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3636" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":359,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:38:34.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jul 1 12:38:35.040: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:38:49.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2139" for this suite.
• [SLOW TEST:14.639 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":368,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:38:49.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul 1 12:38:49.698: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:38:57.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5218" for this suite.
• [SLOW TEST:8.056 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":24,"skipped":386,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:38:57.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9459
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-9459
I0701 12:38:58.098769 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9459, replica count: 2
I0701 12:39:01.149346 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0701 12:39:04.149560 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jul 1 12:39:04.149: INFO: Creating new exec pod
Jul 1 12:39:09.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9459 execpodfxtbh -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 1 12:39:09.528: INFO: stderr: "I0701 12:39:09.302008 209 log.go:172] (0xc000b9b1e0) (0xc0009a06e0) Create stream\nI0701 12:39:09.302080 209 log.go:172] (0xc000b9b1e0) (0xc0009a06e0) Stream added, broadcasting: 1\nI0701 12:39:09.305970 209 log.go:172] (0xc000b9b1e0) Reply frame received for 1\nI0701 12:39:09.306002 209 log.go:172] (0xc000b9b1e0) (0xc0005e4640) Create stream\nI0701 12:39:09.306010 209 log.go:172] (0xc000b9b1e0) (0xc0005e4640) Stream added, broadcasting: 3\nI0701 12:39:09.310979 209 log.go:172] (0xc000b9b1e0) Reply frame received for 3\nI0701 12:39:09.311007 209 log.go:172] (0xc000b9b1e0) (0xc0007c4be0) Create stream\nI0701 12:39:09.311015 209 log.go:172] (0xc000b9b1e0) (0xc0007c4be0) Stream added, broadcasting: 5\nI0701 12:39:09.311611 209 log.go:172] (0xc000b9b1e0) Reply frame received for 5\nI0701 12:39:09.466152 209 log.go:172] (0xc000b9b1e0) Data frame received for 5\nI0701 12:39:09.466175 209 log.go:172] (0xc0007c4be0) (5) Data frame handling\nI0701 12:39:09.466194 209 log.go:172] (0xc0007c4be0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0701 12:39:09.517500 209 log.go:172] (0xc000b9b1e0) Data frame received for 5\nI0701 12:39:09.517549 209 log.go:172] (0xc0007c4be0) (5) Data frame handling\nI0701 12:39:09.517586 209 log.go:172] (0xc0007c4be0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0701 12:39:09.518092 209 log.go:172] (0xc000b9b1e0) Data frame received for 3\nI0701 12:39:09.518110 209 log.go:172] (0xc0005e4640) (3) Data frame handling\nI0701 12:39:09.518141 209 log.go:172] (0xc000b9b1e0) Data frame received for 5\nI0701 12:39:09.518175 209 log.go:172] (0xc0007c4be0) (5) Data frame handling\nI0701 12:39:09.519792 209 log.go:172] (0xc000b9b1e0) Data frame received for 1\nI0701 12:39:09.519815 209 log.go:172] (0xc0009a06e0) (1) Data frame handling\nI0701 12:39:09.519826 209 log.go:172] (0xc0009a06e0) (1) Data frame sent\nI0701 12:39:09.519840 209 log.go:172] (0xc000b9b1e0) (0xc0009a06e0) Stream removed, broadcasting: 1\nI0701 12:39:09.519850 209 log.go:172] (0xc000b9b1e0) Go away received\nI0701 12:39:09.520277 209 log.go:172] (0xc000b9b1e0) (0xc0009a06e0) Stream removed, broadcasting: 1\nI0701 12:39:09.520294 209 log.go:172] (0xc000b9b1e0) (0xc0005e4640) Stream removed, broadcasting: 3\nI0701 12:39:09.520305 209 log.go:172] (0xc000b9b1e0) (0xc0007c4be0) Stream removed, broadcasting: 5\n"
Jul 1 12:39:09.528: INFO: stdout: ""
Jul 1 12:39:09.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9459 execpodfxtbh -- /bin/sh -x -c nc -zv -t -w 2 10.111.85.25 80'
Jul 1 12:39:09.727: INFO: stderr: "I0701 12:39:09.643735 228 log.go:172] (0xc000a93550) (0xc00098c8c0) Create stream\nI0701 12:39:09.643816 228 log.go:172] (0xc000a93550) (0xc00098c8c0) Stream added, broadcasting: 1\nI0701 12:39:09.649352 228 log.go:172] (0xc000a93550) Reply frame received for 1\nI0701 12:39:09.649404 228 log.go:172] (0xc000a93550) (0xc0005f6640) Create stream\nI0701 12:39:09.649421 228 log.go:172] (0xc000a93550) (0xc0005f6640) Stream added, broadcasting: 3\nI0701 12:39:09.650372 228 log.go:172] (0xc000a93550) Reply frame received for 3\nI0701 12:39:09.650418 228 log.go:172] (0xc000a93550) (0xc00075f400) Create stream\nI0701 12:39:09.650433 228 log.go:172] (0xc000a93550) (0xc00075f400) Stream added, broadcasting: 5\nI0701 12:39:09.651243 228 log.go:172] (0xc000a93550) Reply frame received for 5\nI0701 12:39:09.714338 228 log.go:172] (0xc000a93550) Data frame received for 5\nI0701 12:39:09.714369 228 log.go:172] (0xc00075f400) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.85.25 80\nConnection to 10.111.85.25 80 port [tcp/http] succeeded!\nI0701 12:39:09.714411 228 log.go:172] (0xc000a93550) Data frame received for 3\nI0701 12:39:09.714457 228 log.go:172] (0xc0005f6640) (3) Data frame handling\nI0701 12:39:09.714488 228 log.go:172] (0xc00075f400) (5) Data frame sent\nI0701 12:39:09.714504 228 log.go:172] (0xc000a93550) Data frame received for 5\nI0701 12:39:09.714516 228 log.go:172] (0xc00075f400) (5) Data frame handling\nI0701 12:39:09.717101 228 log.go:172] (0xc000a93550) Data frame received for 1\nI0701 12:39:09.717312 228 log.go:172] (0xc00098c8c0) (1) Data frame handling\nI0701 12:39:09.717343 228 log.go:172] (0xc00098c8c0) (1) Data frame sent\nI0701 12:39:09.717356 228 log.go:172] (0xc000a93550) (0xc00098c8c0) Stream removed, broadcasting: 1\nI0701 12:39:09.717381 228 log.go:172] (0xc000a93550) Go away received\nI0701 12:39:09.717762 228 log.go:172] (0xc000a93550) (0xc00098c8c0) Stream removed, broadcasting: 1\nI0701 12:39:09.717783 228 log.go:172] (0xc000a93550) (0xc0005f6640) Stream removed, broadcasting: 3\nI0701 12:39:09.717792 228 log.go:172] (0xc000a93550) (0xc00075f400) Stream removed, broadcasting: 5\n"
Jul 1 12:39:09.727: INFO: stdout: ""
Jul 1 12:39:09.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9459 execpodfxtbh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31845'
Jul 1 12:39:09.967:
INFO: stderr: "I0701 12:39:09.896322 247 log.go:172] (0xc0007b80b0) (0xc0007d2140) Create stream\nI0701 12:39:09.896386 247 log.go:172] (0xc0007b80b0) (0xc0007d2140) Stream added, broadcasting: 1\nI0701 12:39:09.898778 247 log.go:172] (0xc0007b80b0) Reply frame received for 1\nI0701 12:39:09.898811 247 log.go:172] (0xc0007b80b0) (0xc000644000) Create stream\nI0701 12:39:09.898825 247 log.go:172] (0xc0007b80b0) (0xc000644000) Stream added, broadcasting: 3\nI0701 12:39:09.899783 247 log.go:172] (0xc0007b80b0) Reply frame received for 3\nI0701 12:39:09.899820 247 log.go:172] (0xc0007b80b0) (0xc0004de000) Create stream\nI0701 12:39:09.899832 247 log.go:172] (0xc0007b80b0) (0xc0004de000) Stream added, broadcasting: 5\nI0701 12:39:09.900613 247 log.go:172] (0xc0007b80b0) Reply frame received for 5\nI0701 12:39:09.957560 247 log.go:172] (0xc0007b80b0) Data frame received for 5\nI0701 12:39:09.957600 247 log.go:172] (0xc0004de000) (5) Data frame handling\nI0701 12:39:09.957619 247 log.go:172] (0xc0004de000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 31845\nI0701 12:39:09.957636 247 log.go:172] (0xc0007b80b0) Data frame received for 5\nI0701 12:39:09.957696 247 log.go:172] (0xc0004de000) (5) Data frame handling\nI0701 12:39:09.957741 247 log.go:172] (0xc0004de000) (5) Data frame sent\nConnection to 172.17.0.10 31845 port [tcp/31845] succeeded!\nI0701 12:39:09.958161 247 log.go:172] (0xc0007b80b0) Data frame received for 3\nI0701 12:39:09.958174 247 log.go:172] (0xc000644000) (3) Data frame handling\nI0701 12:39:09.958266 247 log.go:172] (0xc0007b80b0) Data frame received for 5\nI0701 12:39:09.958285 247 log.go:172] (0xc0004de000) (5) Data frame handling\nI0701 12:39:09.959667 247 log.go:172] (0xc0007b80b0) Data frame received for 1\nI0701 12:39:09.959689 247 log.go:172] (0xc0007d2140) (1) Data frame handling\nI0701 12:39:09.959706 247 log.go:172] (0xc0007d2140) (1) Data frame sent\nI0701 12:39:09.959720 247 log.go:172] (0xc0007b80b0) (0xc0007d2140) Stream removed, 
broadcasting: 1\nI0701 12:39:09.959739 247 log.go:172] (0xc0007b80b0) Go away received\nI0701 12:39:09.960135 247 log.go:172] (0xc0007b80b0) (0xc0007d2140) Stream removed, broadcasting: 1\nI0701 12:39:09.960160 247 log.go:172] (0xc0007b80b0) (0xc000644000) Stream removed, broadcasting: 3\nI0701 12:39:09.960170 247 log.go:172] (0xc0007b80b0) (0xc0004de000) Stream removed, broadcasting: 5\n" Jul 1 12:39:09.967: INFO: stdout: "" Jul 1 12:39:09.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9459 execpodfxtbh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31845' Jul 1 12:39:10.151: INFO: stderr: "I0701 12:39:10.080767 265 log.go:172] (0xc0001142c0) (0xc0006adc20) Create stream\nI0701 12:39:10.080832 265 log.go:172] (0xc0001142c0) (0xc0006adc20) Stream added, broadcasting: 1\nI0701 12:39:10.083097 265 log.go:172] (0xc0001142c0) Reply frame received for 1\nI0701 12:39:10.083123 265 log.go:172] (0xc0001142c0) (0xc00064e6e0) Create stream\nI0701 12:39:10.083130 265 log.go:172] (0xc0001142c0) (0xc00064e6e0) Stream added, broadcasting: 3\nI0701 12:39:10.083645 265 log.go:172] (0xc0001142c0) Reply frame received for 3\nI0701 12:39:10.083668 265 log.go:172] (0xc0001142c0) (0xc00047b4a0) Create stream\nI0701 12:39:10.083682 265 log.go:172] (0xc0001142c0) (0xc00047b4a0) Stream added, broadcasting: 5\nI0701 12:39:10.084291 265 log.go:172] (0xc0001142c0) Reply frame received for 5\nI0701 12:39:10.143521 265 log.go:172] (0xc0001142c0) Data frame received for 5\nI0701 12:39:10.143538 265 log.go:172] (0xc00047b4a0) (5) Data frame handling\nI0701 12:39:10.143547 265 log.go:172] (0xc00047b4a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 31845\nConnection to 172.17.0.8 31845 port [tcp/31845] succeeded!\nI0701 12:39:10.143807 265 log.go:172] (0xc0001142c0) Data frame received for 5\nI0701 12:39:10.143823 265 log.go:172] (0xc00047b4a0) (5) Data frame handling\nI0701 12:39:10.144027 265 log.go:172] (0xc0001142c0) Data frame received for 
3\nI0701 12:39:10.144051 265 log.go:172] (0xc00064e6e0) (3) Data frame handling\nI0701 12:39:10.145611 265 log.go:172] (0xc0001142c0) Data frame received for 1\nI0701 12:39:10.145634 265 log.go:172] (0xc0006adc20) (1) Data frame handling\nI0701 12:39:10.145648 265 log.go:172] (0xc0006adc20) (1) Data frame sent\nI0701 12:39:10.145661 265 log.go:172] (0xc0001142c0) (0xc0006adc20) Stream removed, broadcasting: 1\nI0701 12:39:10.145675 265 log.go:172] (0xc0001142c0) Go away received\nI0701 12:39:10.146029 265 log.go:172] (0xc0001142c0) (0xc0006adc20) Stream removed, broadcasting: 1\nI0701 12:39:10.146050 265 log.go:172] (0xc0001142c0) (0xc00064e6e0) Stream removed, broadcasting: 3\nI0701 12:39:10.146057 265 log.go:172] (0xc0001142c0) (0xc00047b4a0) Stream removed, broadcasting: 5\n" Jul 1 12:39:10.151: INFO: stdout: "" Jul 1 12:39:10.151: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:39:10.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9459" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.604 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":25,"skipped":436,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:39:10.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 1 12:39:10.879: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:39:10.920: INFO: Number of nodes with available pods: 0 Jul 1 12:39:10.920: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:39:12.196: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:39:12.199: INFO: Number of nodes with available pods: 0 Jul 1 12:39:12.199: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:39:12.939: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:39:12.942: INFO: Number of nodes with available pods: 0 Jul 1 12:39:12.942: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:39:14.059: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:39:14.062: INFO: Number of nodes with available pods: 0 Jul 1 12:39:14.062: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:39:14.925: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:39:15.118: INFO: Number of nodes with available pods: 0 Jul 1 12:39:15.118: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:39:15.988: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:39:15.994: INFO: Number of nodes with available pods: 1 Jul 1 12:39:15.994: INFO: Node jerma-worker2 is 
running more than one daemon pod Jul 1 12:39:16.926: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:39:16.928: INFO: Number of nodes with available pods: 2 Jul 1 12:39:16.928: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 1 12:39:17.300: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:39:17.312: INFO: Number of nodes with available pods: 2 Jul 1 12:39:17.312: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5513, will wait for the garbage collector to delete the pods Jul 1 12:39:19.393: INFO: Deleting DaemonSet.extensions daemon-set took: 821.134084ms Jul 1 12:39:20.093: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.221244ms Jul 1 12:39:29.597: INFO: Number of nodes with available pods: 0 Jul 1 12:39:29.597: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 12:39:29.603: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5513/daemonsets","resourceVersion":"28774301"},"items":null} Jul 1 12:39:29.607: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5513/pods","resourceVersion":"28774301"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:39:29.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5513" for this suite. • [SLOW TEST:19.389 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":26,"skipped":450,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:39:29.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 12:39:29.720: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79" in namespace "projected-9647" to be "success or failure" Jul 1 12:39:29.730: INFO: Pod "downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79": 
Phase="Pending", Reason="", readiness=false. Elapsed: 10.551137ms Jul 1 12:39:31.735: INFO: Pod "downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01505404s Jul 1 12:39:33.739: INFO: Pod "downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79": Phase="Running", Reason="", readiness=true. Elapsed: 4.019654434s Jul 1 12:39:35.744: INFO: Pod "downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024098056s STEP: Saw pod success Jul 1 12:39:35.744: INFO: Pod "downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79" satisfied condition "success or failure" Jul 1 12:39:35.747: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79 container client-container: STEP: delete the pod Jul 1 12:39:35.773: INFO: Waiting for pod downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79 to disappear Jul 1 12:39:35.790: INFO: Pod downwardapi-volume-d446a875-24c2-46e2-80d4-97d1c49abd79 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:39:35.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9647" for this suite. 
• [SLOW TEST:6.154 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":462,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:39:35.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.dns-2469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2469.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.19.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.19.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.19.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.19.195_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2469.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.19.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.19.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.19.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.19.195_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 12:39:50.092: INFO: Unable to read wheezy_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:50.095: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:50.097: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:50.100: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:50.215: INFO: Unable to read jessie_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:50.217: INFO: Unable to read jessie_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:50.220: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod 
dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:50.222: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:50.239: INFO: Lookups using dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19 failed for: [wheezy_udp@dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_udp@dns-test-service.dns-2469.svc.cluster.local jessie_tcp@dns-test-service.dns-2469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local] Jul 1 12:39:55.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:55.249: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:55.252: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:55.255: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod 
dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:55.273: INFO: Unable to read jessie_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:55.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:55.279: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:55.282: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19) Jul 1 12:39:55.300: INFO: Lookups using dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19 failed for: [wheezy_udp@dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_udp@dns-test-service.dns-2469.svc.cluster.local jessie_tcp@dns-test-service.dns-2469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local] Jul 1 12:40:00.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-2469.svc.cluster.local from pod 
dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:00.249: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:00.254: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:00.256: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:00.278: INFO: Unable to read jessie_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:00.280: INFO: Unable to read jessie_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:00.283: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:00.286: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:00.303: INFO: Lookups using dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19 failed for: [wheezy_udp@dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_udp@dns-test-service.dns-2469.svc.cluster.local jessie_tcp@dns-test-service.dns-2469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local]
Jul 1 12:40:05.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:05.249: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:05.252: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:05.255: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:05.275: INFO: Unable to read jessie_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:05.278: INFO: Unable to read jessie_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:05.281: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:05.283: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:05.301: INFO: Lookups using dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19 failed for: [wheezy_udp@dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_udp@dns-test-service.dns-2469.svc.cluster.local jessie_tcp@dns-test-service.dns-2469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local]
Jul 1 12:40:10.401: INFO: Unable to read wheezy_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:10.829: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:10.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:10.835: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:10.851: INFO: Unable to read jessie_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:10.853: INFO: Unable to read jessie_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:10.856: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:10.858: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:10.875: INFO: Lookups using dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19 failed for: [wheezy_udp@dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_udp@dns-test-service.dns-2469.svc.cluster.local jessie_tcp@dns-test-service.dns-2469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local]
Jul 1 12:40:15.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:15.249: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:15.251: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:15.255: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:15.277: INFO: Unable to read jessie_udp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:15.280: INFO: Unable to read jessie_tcp@dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:15.282: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:15.284: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local from pod dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19: the server could not find the requested resource (get pods dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19)
Jul 1 12:40:15.298: INFO: Lookups using dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19 failed for: [wheezy_udp@dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@dns-test-service.dns-2469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_udp@dns-test-service.dns-2469.svc.cluster.local jessie_tcp@dns-test-service.dns-2469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2469.svc.cluster.local]
Jul 1 12:40:20.300: INFO: DNS probes using dns-2469/dns-test-59197743-1e9c-44b1-a3d0-98b024bb1a19 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:40:21.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2469" for this suite.
• [SLOW TEST:45.511 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":28,"skipped":477,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:40:21.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1397.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1397.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 1 12:40:29.474: INFO: DNS probes using dns-1397/dns-test-e8b12c90-fe36-45c5-b3ab-332b6e425e89 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:40:29.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1397" for this suite.
• [SLOW TEST:8.260 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":29,"skipped":493,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:40:29.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 1 12:40:33.960: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:40:34.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7091" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":499,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:40:34.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 1 12:40:34.194: INFO: Waiting up to 5m0s for pod "pod-ef99688e-1f95-46d8-be58-e226d4898809" in namespace "emptydir-5414" to be "success or failure"
Jul 1 12:40:34.220: INFO: Pod "pod-ef99688e-1f95-46d8-be58-e226d4898809": Phase="Pending", Reason="", readiness=false. Elapsed: 26.657743ms
Jul 1 12:40:36.304: INFO: Pod "pod-ef99688e-1f95-46d8-be58-e226d4898809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110568362s
Jul 1 12:40:38.308: INFO: Pod "pod-ef99688e-1f95-46d8-be58-e226d4898809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114253027s
STEP: Saw pod success
Jul 1 12:40:38.308: INFO: Pod "pod-ef99688e-1f95-46d8-be58-e226d4898809" satisfied condition "success or failure"
Jul 1 12:40:38.310: INFO: Trying to get logs from node jerma-worker pod pod-ef99688e-1f95-46d8-be58-e226d4898809 container test-container:
STEP: delete the pod
Jul 1 12:40:38.342: INFO: Waiting for pod pod-ef99688e-1f95-46d8-be58-e226d4898809 to disappear
Jul 1 12:40:38.354: INFO: Pod pod-ef99688e-1f95-46d8-be58-e226d4898809 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:40:38.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5414" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":500,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:40:38.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6154
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 1 12:40:38.460: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 1 12:41:06.660: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.190 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6154 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 1 12:41:06.660: INFO: >>> kubeConfig: /root/.kube/config
I0701 12:41:06.697266 6 log.go:172] (0xc0027b24d0) (0xc000f3db80) Create stream
I0701 12:41:06.697317 6 log.go:172] (0xc0027b24d0) (0xc000f3db80) Stream added, broadcasting: 1
I0701 12:41:06.699208 6 log.go:172] (0xc0027b24d0) Reply frame received for 1
I0701 12:41:06.699250 6 log.go:172] (0xc0027b24d0) (0xc000f3dc20) Create stream
I0701 12:41:06.699266 6 log.go:172] (0xc0027b24d0) (0xc000f3dc20) Stream added, broadcasting: 3
I0701 12:41:06.700204 6 log.go:172] (0xc0027b24d0) Reply frame received for 3
I0701 12:41:06.700233 6 log.go:172] (0xc0027b24d0) (0xc000bdcf00) Create stream
I0701 12:41:06.700245 6 log.go:172] (0xc0027b24d0) (0xc000bdcf00) Stream added, broadcasting: 5
I0701 12:41:06.701376 6 log.go:172] (0xc0027b24d0) Reply frame received for 5
I0701 12:41:07.825406 6 log.go:172] (0xc0027b24d0) Data frame received for 3
I0701 12:41:07.825518 6 log.go:172] (0xc000f3dc20) (3) Data frame handling
I0701 12:41:07.825609 6 log.go:172] (0xc000f3dc20) (3) Data frame sent
I0701 12:41:07.825892 6 log.go:172] (0xc0027b24d0) Data frame received for 3
I0701 12:41:07.825930 6 log.go:172] (0xc000f3dc20) (3) Data frame handling
I0701 12:41:07.825978 6 log.go:172] (0xc0027b24d0) Data frame received for 5
I0701 12:41:07.826029 6 log.go:172] (0xc000bdcf00) (5) Data frame handling
I0701 12:41:07.828186 6 log.go:172] (0xc0027b24d0) Data frame received for 1
I0701 12:41:07.828220 6 log.go:172] (0xc000f3db80) (1) Data frame handling
I0701 12:41:07.828234 6 log.go:172] (0xc000f3db80) (1) Data frame sent
I0701 12:41:07.828250 6 log.go:172] (0xc0027b24d0) (0xc000f3db80) Stream removed, broadcasting: 1
I0701 12:41:07.828277 6 log.go:172] (0xc0027b24d0) Go away received
I0701 12:41:07.828713 6 log.go:172] (0xc0027b24d0) (0xc000f3db80) Stream removed, broadcasting: 1
I0701 12:41:07.828757 6 log.go:172] (0xc0027b24d0) (0xc000f3dc20) Stream removed, broadcasting: 3
I0701 12:41:07.828789 6 log.go:172] (0xc0027b24d0) (0xc000bdcf00) Stream removed, broadcasting: 5
Jul 1 12:41:07.828: INFO: Found all expected endpoints: [netserver-0]
Jul 1 12:41:07.832: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.218 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6154 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 1 12:41:07.832: INFO: >>> kubeConfig: /root/.kube/config
I0701 12:41:07.889935 6 log.go:172] (0xc0027b2a50) (0xc001d78000) Create stream
I0701 12:41:07.889976 6 log.go:172] (0xc0027b2a50) (0xc001d78000) Stream added, broadcasting: 1
I0701 12:41:07.891964 6 log.go:172] (0xc0027b2a50) Reply frame received for 1
I0701 12:41:07.892013 6 log.go:172] (0xc0027b2a50) (0xc001d780a0) Create stream
I0701 12:41:07.892031 6 log.go:172] (0xc0027b2a50) (0xc001d780a0) Stream added, broadcasting: 3
I0701 12:41:07.893399 6 log.go:172] (0xc0027b2a50) Reply frame received for 3
I0701 12:41:07.893431 6 log.go:172] (0xc0027b2a50) (0xc00196a460) Create stream
I0701 12:41:07.893443 6 log.go:172] (0xc0027b2a50) (0xc00196a460) Stream added, broadcasting: 5
I0701 12:41:07.894165 6 log.go:172] (0xc0027b2a50) Reply frame received for 5
I0701 12:41:08.980547 6 log.go:172] (0xc0027b2a50) Data frame received for 3
I0701 12:41:08.980598 6 log.go:172] (0xc001d780a0) (3) Data frame handling
I0701 12:41:08.980633 6 log.go:172] (0xc001d780a0) (3) Data frame sent
I0701 12:41:08.980648 6 log.go:172] (0xc0027b2a50) Data frame received for 3
I0701 12:41:08.980793 6 log.go:172] (0xc001d780a0) (3) Data frame handling
I0701 12:41:08.980856 6 log.go:172] (0xc0027b2a50) Data frame received for 5
I0701 12:41:08.980942 6 log.go:172] (0xc00196a460) (5) Data frame handling
I0701 12:41:08.983180 6 log.go:172] (0xc0027b2a50) Data frame received for 1
I0701 12:41:08.983204 6 log.go:172] (0xc001d78000) (1) Data frame handling
I0701 12:41:08.983222 6 log.go:172] (0xc001d78000) (1) Data frame sent
I0701 12:41:08.983248 6 log.go:172] (0xc0027b2a50) (0xc001d78000) Stream removed, broadcasting: 1
I0701 12:41:08.983274 6 log.go:172] (0xc0027b2a50) Go away received
I0701 12:41:08.983648 6 log.go:172] (0xc0027b2a50) (0xc001d78000) Stream removed, broadcasting: 1
I0701 12:41:08.983689 6 log.go:172] (0xc0027b2a50) (0xc001d780a0) Stream removed, broadcasting: 3
I0701 12:41:08.983716 6 log.go:172] (0xc0027b2a50) (0xc00196a460) Stream removed, broadcasting: 5
Jul 1 12:41:08.983: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:41:08.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6154" for this suite.
• [SLOW TEST:30.632 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":509,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:41:08.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:41:13.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4064" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":33,"skipped":521,"failed":0}
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:41:13.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jul 1 12:41:19.870: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul 1 12:41:29.986: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:41:29.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1888" for this suite.
• [SLOW TEST:16.219 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":34,"skipped":523,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:41:30.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jul 1 12:41:30.111: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:41:30.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4031" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":35,"skipped":558,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:41:30.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 1 12:41:30.267: INFO: Waiting up to 5m0s for pod "pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312" in namespace "emptydir-8999" to be "success or failure"
Jul 1 12:41:30.291: INFO: Pod "pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312": Phase="Pending", Reason="", readiness=false. Elapsed: 23.942388ms
Jul 1 12:41:32.530: INFO: Pod "pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263785885s
Jul 1 12:41:36.098: INFO: Pod "pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312": Phase="Pending", Reason="", readiness=false. Elapsed: 5.830947567s
Jul 1 12:41:38.102: INFO: Pod "pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312": Phase="Pending", Reason="", readiness=false. Elapsed: 7.835107318s
Jul 1 12:41:40.107: INFO: Pod "pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.839980655s
STEP: Saw pod success
Jul 1 12:41:40.107: INFO: Pod "pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312" satisfied condition "success or failure"
Jul 1 12:41:40.110: INFO: Trying to get logs from node jerma-worker2 pod pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312 container test-container:
STEP: delete the pod
Jul 1 12:41:40.170: INFO: Waiting for pod pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312 to disappear
Jul 1 12:41:40.176: INFO: Pod pod-f1bd895e-dbd4-4761-8db1-fef5fdce1312 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:41:40.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8999" for this suite.
• [SLOW TEST:9.981 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":560,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:41:40.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 1 12:41:40.256: INFO: Waiting up to 5m0s for pod "busybox-user-65534-00b548d9-718c-4d8c-bacf-e3e3d3e22a98" in namespace "security-context-test-8300" to be "success or failure"
Jul 1 12:41:40.260: INFO: Pod "busybox-user-65534-00b548d9-718c-4d8c-bacf-e3e3d3e22a98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.56579ms
Jul 1 12:41:42.264: INFO: Pod "busybox-user-65534-00b548d9-718c-4d8c-bacf-e3e3d3e22a98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007915157s
Jul 1 12:41:44.311: INFO: Pod "busybox-user-65534-00b548d9-718c-4d8c-bacf-e3e3d3e22a98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055063382s
Jul 1 12:41:44.311: INFO: Pod "busybox-user-65534-00b548d9-718c-4d8c-bacf-e3e3d3e22a98" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:41:44.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8300" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":569,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:41:44.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 1 12:41:56.423: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 1 12:41:56.424: INFO: >>> kubeConfig: /root/.kube/config
I0701 12:41:56.459664 6 log.go:172] (0xc00351a000) (0xc000fb08c0) Create stream
I0701 12:41:56.459693 6 log.go:172] (0xc00351a000) (0xc000fb08c0) Stream added, broadcasting: 1
I0701 12:41:56.467191 6 log.go:172] (0xc00351a000) Reply frame received for 1
I0701 12:41:56.467289 6 log.go:172] (0xc00351a000) (0xc001158000) Create stream
I0701 12:41:56.467342 6 log.go:172] (0xc00351a000) (0xc001158000) Stream added, broadcasting: 3
I0701 
12:41:56.468370 6 log.go:172] (0xc00351a000) Reply frame received for 3 I0701 12:41:56.468425 6 log.go:172] (0xc00351a000) (0xc002037400) Create stream I0701 12:41:56.468462 6 log.go:172] (0xc00351a000) (0xc002037400) Stream added, broadcasting: 5 I0701 12:41:56.469797 6 log.go:172] (0xc00351a000) Reply frame received for 5 I0701 12:41:56.541470 6 log.go:172] (0xc00351a000) Data frame received for 5 I0701 12:41:56.541517 6 log.go:172] (0xc002037400) (5) Data frame handling I0701 12:41:56.541548 6 log.go:172] (0xc00351a000) Data frame received for 3 I0701 12:41:56.541563 6 log.go:172] (0xc001158000) (3) Data frame handling I0701 12:41:56.541580 6 log.go:172] (0xc001158000) (3) Data frame sent I0701 12:41:56.541594 6 log.go:172] (0xc00351a000) Data frame received for 3 I0701 12:41:56.541607 6 log.go:172] (0xc001158000) (3) Data frame handling I0701 12:41:56.542966 6 log.go:172] (0xc00351a000) Data frame received for 1 I0701 12:41:56.543010 6 log.go:172] (0xc000fb08c0) (1) Data frame handling I0701 12:41:56.543043 6 log.go:172] (0xc000fb08c0) (1) Data frame sent I0701 12:41:56.543092 6 log.go:172] (0xc00351a000) (0xc000fb08c0) Stream removed, broadcasting: 1 I0701 12:41:56.543131 6 log.go:172] (0xc00351a000) Go away received I0701 12:41:56.543381 6 log.go:172] (0xc00351a000) (0xc000fb08c0) Stream removed, broadcasting: 1 I0701 12:41:56.543418 6 log.go:172] (0xc00351a000) (0xc001158000) Stream removed, broadcasting: 3 I0701 12:41:56.543450 6 log.go:172] (0xc00351a000) (0xc002037400) Stream removed, broadcasting: 5 Jul 1 12:41:56.543: INFO: Exec stderr: "" Jul 1 12:41:56.543: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:56.543: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:56.579136 6 log.go:172] (0xc0020ab6b0) (0xc000d2c6e0) Create stream I0701 12:41:56.579178 6 log.go:172] (0xc0020ab6b0) 
(0xc000d2c6e0) Stream added, broadcasting: 1 I0701 12:41:56.583230 6 log.go:172] (0xc0020ab6b0) Reply frame received for 1 I0701 12:41:56.583271 6 log.go:172] (0xc0020ab6b0) (0xc001c82000) Create stream I0701 12:41:56.583286 6 log.go:172] (0xc0020ab6b0) (0xc001c82000) Stream added, broadcasting: 3 I0701 12:41:56.584545 6 log.go:172] (0xc0020ab6b0) Reply frame received for 3 I0701 12:41:56.584614 6 log.go:172] (0xc0020ab6b0) (0xc00196a000) Create stream I0701 12:41:56.584632 6 log.go:172] (0xc0020ab6b0) (0xc00196a000) Stream added, broadcasting: 5 I0701 12:41:56.586144 6 log.go:172] (0xc0020ab6b0) Reply frame received for 5 I0701 12:41:56.652078 6 log.go:172] (0xc0020ab6b0) Data frame received for 5 I0701 12:41:56.652129 6 log.go:172] (0xc00196a000) (5) Data frame handling I0701 12:41:56.652163 6 log.go:172] (0xc0020ab6b0) Data frame received for 3 I0701 12:41:56.652205 6 log.go:172] (0xc001c82000) (3) Data frame handling I0701 12:41:56.652234 6 log.go:172] (0xc001c82000) (3) Data frame sent I0701 12:41:56.652256 6 log.go:172] (0xc0020ab6b0) Data frame received for 3 I0701 12:41:56.652273 6 log.go:172] (0xc001c82000) (3) Data frame handling I0701 12:41:56.653702 6 log.go:172] (0xc0020ab6b0) Data frame received for 1 I0701 12:41:56.653725 6 log.go:172] (0xc000d2c6e0) (1) Data frame handling I0701 12:41:56.653746 6 log.go:172] (0xc000d2c6e0) (1) Data frame sent I0701 12:41:56.653850 6 log.go:172] (0xc0020ab6b0) (0xc000d2c6e0) Stream removed, broadcasting: 1 I0701 12:41:56.653866 6 log.go:172] (0xc0020ab6b0) Go away received I0701 12:41:56.654083 6 log.go:172] (0xc0020ab6b0) (0xc000d2c6e0) Stream removed, broadcasting: 1 I0701 12:41:56.654119 6 log.go:172] (0xc0020ab6b0) (0xc001c82000) Stream removed, broadcasting: 3 I0701 12:41:56.654151 6 log.go:172] (0xc0020ab6b0) (0xc00196a000) Stream removed, broadcasting: 5 Jul 1 12:41:56.654: INFO: Exec stderr: "" Jul 1 12:41:56.654: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2739 
PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:56.654: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:56.682311 6 log.go:172] (0xc0031a0210) (0xc000bdc460) Create stream I0701 12:41:56.682345 6 log.go:172] (0xc0031a0210) (0xc000bdc460) Stream added, broadcasting: 1 I0701 12:41:56.685343 6 log.go:172] (0xc0031a0210) Reply frame received for 1 I0701 12:41:56.685392 6 log.go:172] (0xc0031a0210) (0xc000d2ca00) Create stream I0701 12:41:56.685407 6 log.go:172] (0xc0031a0210) (0xc000d2ca00) Stream added, broadcasting: 3 I0701 12:41:56.686408 6 log.go:172] (0xc0031a0210) Reply frame received for 3 I0701 12:41:56.686461 6 log.go:172] (0xc0031a0210) (0xc001c82280) Create stream I0701 12:41:56.686493 6 log.go:172] (0xc0031a0210) (0xc001c82280) Stream added, broadcasting: 5 I0701 12:41:56.687454 6 log.go:172] (0xc0031a0210) Reply frame received for 5 I0701 12:41:56.739211 6 log.go:172] (0xc0031a0210) Data frame received for 5 I0701 12:41:56.739256 6 log.go:172] (0xc0031a0210) Data frame received for 3 I0701 12:41:56.739304 6 log.go:172] (0xc000d2ca00) (3) Data frame handling I0701 12:41:56.739322 6 log.go:172] (0xc000d2ca00) (3) Data frame sent I0701 12:41:56.739334 6 log.go:172] (0xc0031a0210) Data frame received for 3 I0701 12:41:56.739341 6 log.go:172] (0xc000d2ca00) (3) Data frame handling I0701 12:41:56.739367 6 log.go:172] (0xc001c82280) (5) Data frame handling I0701 12:41:56.740830 6 log.go:172] (0xc0031a0210) Data frame received for 1 I0701 12:41:56.740852 6 log.go:172] (0xc000bdc460) (1) Data frame handling I0701 12:41:56.740877 6 log.go:172] (0xc000bdc460) (1) Data frame sent I0701 12:41:56.740893 6 log.go:172] (0xc0031a0210) (0xc000bdc460) Stream removed, broadcasting: 1 I0701 12:41:56.740927 6 log.go:172] (0xc0031a0210) Go away received I0701 12:41:56.741082 6 log.go:172] (0xc0031a0210) (0xc000bdc460) Stream removed, broadcasting: 1 I0701 12:41:56.741096 6 log.go:172] (0xc0031a0210) 
(0xc000d2ca00) Stream removed, broadcasting: 3 I0701 12:41:56.741104 6 log.go:172] (0xc0031a0210) (0xc001c82280) Stream removed, broadcasting: 5 Jul 1 12:41:56.741: INFO: Exec stderr: "" Jul 1 12:41:56.741: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:56.741: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:56.767446 6 log.go:172] (0xc0033184d0) (0xc000f3c8c0) Create stream I0701 12:41:56.767469 6 log.go:172] (0xc0033184d0) (0xc000f3c8c0) Stream added, broadcasting: 1 I0701 12:41:56.769717 6 log.go:172] (0xc0033184d0) Reply frame received for 1 I0701 12:41:56.769757 6 log.go:172] (0xc0033184d0) (0xc001c823c0) Create stream I0701 12:41:56.769770 6 log.go:172] (0xc0033184d0) (0xc001c823c0) Stream added, broadcasting: 3 I0701 12:41:56.770538 6 log.go:172] (0xc0033184d0) Reply frame received for 3 I0701 12:41:56.770583 6 log.go:172] (0xc0033184d0) (0xc001c82460) Create stream I0701 12:41:56.770593 6 log.go:172] (0xc0033184d0) (0xc001c82460) Stream added, broadcasting: 5 I0701 12:41:56.771381 6 log.go:172] (0xc0033184d0) Reply frame received for 5 I0701 12:41:56.831469 6 log.go:172] (0xc0033184d0) Data frame received for 5 I0701 12:41:56.831497 6 log.go:172] (0xc001c82460) (5) Data frame handling I0701 12:41:56.831513 6 log.go:172] (0xc0033184d0) Data frame received for 3 I0701 12:41:56.831525 6 log.go:172] (0xc001c823c0) (3) Data frame handling I0701 12:41:56.831532 6 log.go:172] (0xc001c823c0) (3) Data frame sent I0701 12:41:56.831543 6 log.go:172] (0xc0033184d0) Data frame received for 3 I0701 12:41:56.831548 6 log.go:172] (0xc001c823c0) (3) Data frame handling I0701 12:41:56.832728 6 log.go:172] (0xc0033184d0) Data frame received for 1 I0701 12:41:56.832757 6 log.go:172] (0xc000f3c8c0) (1) Data frame handling I0701 12:41:56.832815 6 log.go:172] (0xc000f3c8c0) (1) Data frame sent I0701 
12:41:56.832837 6 log.go:172] (0xc0033184d0) (0xc000f3c8c0) Stream removed, broadcasting: 1 I0701 12:41:56.832857 6 log.go:172] (0xc0033184d0) Go away received I0701 12:41:56.833045 6 log.go:172] (0xc0033184d0) (0xc000f3c8c0) Stream removed, broadcasting: 1 I0701 12:41:56.833077 6 log.go:172] (0xc0033184d0) (0xc001c823c0) Stream removed, broadcasting: 3 I0701 12:41:56.833092 6 log.go:172] (0xc0033184d0) (0xc001c82460) Stream removed, broadcasting: 5 Jul 1 12:41:56.833: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 1 12:41:56.833: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:56.833: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:56.864697 6 log.go:172] (0xc0031a0840) (0xc000bdcc80) Create stream I0701 12:41:56.864732 6 log.go:172] (0xc0031a0840) (0xc000bdcc80) Stream added, broadcasting: 1 I0701 12:41:56.868350 6 log.go:172] (0xc0031a0840) Reply frame received for 1 I0701 12:41:56.868414 6 log.go:172] (0xc0031a0840) (0xc00196a0a0) Create stream I0701 12:41:56.868433 6 log.go:172] (0xc0031a0840) (0xc00196a0a0) Stream added, broadcasting: 3 I0701 12:41:56.869365 6 log.go:172] (0xc0031a0840) Reply frame received for 3 I0701 12:41:56.869467 6 log.go:172] (0xc0031a0840) (0xc001c82500) Create stream I0701 12:41:56.869506 6 log.go:172] (0xc0031a0840) (0xc001c82500) Stream added, broadcasting: 5 I0701 12:41:56.870450 6 log.go:172] (0xc0031a0840) Reply frame received for 5 I0701 12:41:56.927174 6 log.go:172] (0xc0031a0840) Data frame received for 3 I0701 12:41:56.927386 6 log.go:172] (0xc00196a0a0) (3) Data frame handling I0701 12:41:56.927399 6 log.go:172] (0xc00196a0a0) (3) Data frame sent I0701 12:41:56.927405 6 log.go:172] (0xc0031a0840) Data frame received for 3 I0701 12:41:56.927409 6 log.go:172] (0xc00196a0a0) (3) 
Data frame handling I0701 12:41:56.927436 6 log.go:172] (0xc0031a0840) Data frame received for 5 I0701 12:41:56.927444 6 log.go:172] (0xc001c82500) (5) Data frame handling I0701 12:41:56.929971 6 log.go:172] (0xc0031a0840) Data frame received for 1 I0701 12:41:56.929985 6 log.go:172] (0xc000bdcc80) (1) Data frame handling I0701 12:41:56.929995 6 log.go:172] (0xc000bdcc80) (1) Data frame sent I0701 12:41:56.930012 6 log.go:172] (0xc0031a0840) (0xc000bdcc80) Stream removed, broadcasting: 1 I0701 12:41:56.930154 6 log.go:172] (0xc0031a0840) Go away received I0701 12:41:56.930236 6 log.go:172] (0xc0031a0840) (0xc000bdcc80) Stream removed, broadcasting: 1 I0701 12:41:56.930325 6 log.go:172] (0xc0031a0840) (0xc00196a0a0) Stream removed, broadcasting: 3 I0701 12:41:56.930340 6 log.go:172] (0xc0031a0840) (0xc001c82500) Stream removed, broadcasting: 5 Jul 1 12:41:56.930: INFO: Exec stderr: "" Jul 1 12:41:56.930: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:56.930: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:56.967744 6 log.go:172] (0xc0030c42c0) (0xc000d2cdc0) Create stream I0701 12:41:56.967770 6 log.go:172] (0xc0030c42c0) (0xc000d2cdc0) Stream added, broadcasting: 1 I0701 12:41:56.970767 6 log.go:172] (0xc0030c42c0) Reply frame received for 1 I0701 12:41:56.970813 6 log.go:172] (0xc0030c42c0) (0xc001c825a0) Create stream I0701 12:41:56.970833 6 log.go:172] (0xc0030c42c0) (0xc001c825a0) Stream added, broadcasting: 3 I0701 12:41:56.972020 6 log.go:172] (0xc0030c42c0) Reply frame received for 3 I0701 12:41:56.972065 6 log.go:172] (0xc0030c42c0) (0xc000bdcdc0) Create stream I0701 12:41:56.972082 6 log.go:172] (0xc0030c42c0) (0xc000bdcdc0) Stream added, broadcasting: 5 I0701 12:41:56.973345 6 log.go:172] (0xc0030c42c0) Reply frame received for 5 I0701 12:41:57.026960 6 log.go:172] (0xc0030c42c0) 
Data frame received for 5 I0701 12:41:57.026998 6 log.go:172] (0xc0030c42c0) Data frame received for 3 I0701 12:41:57.027031 6 log.go:172] (0xc001c825a0) (3) Data frame handling I0701 12:41:57.027059 6 log.go:172] (0xc001c825a0) (3) Data frame sent I0701 12:41:57.027074 6 log.go:172] (0xc0030c42c0) Data frame received for 3 I0701 12:41:57.027096 6 log.go:172] (0xc000bdcdc0) (5) Data frame handling I0701 12:41:57.027160 6 log.go:172] (0xc001c825a0) (3) Data frame handling I0701 12:41:57.028553 6 log.go:172] (0xc0030c42c0) Data frame received for 1 I0701 12:41:57.028579 6 log.go:172] (0xc000d2cdc0) (1) Data frame handling I0701 12:41:57.028634 6 log.go:172] (0xc000d2cdc0) (1) Data frame sent I0701 12:41:57.028776 6 log.go:172] (0xc0030c42c0) (0xc000d2cdc0) Stream removed, broadcasting: 1 I0701 12:41:57.028836 6 log.go:172] (0xc0030c42c0) Go away received I0701 12:41:57.028925 6 log.go:172] (0xc0030c42c0) (0xc000d2cdc0) Stream removed, broadcasting: 1 I0701 12:41:57.028946 6 log.go:172] (0xc0030c42c0) (0xc001c825a0) Stream removed, broadcasting: 3 I0701 12:41:57.028959 6 log.go:172] (0xc0030c42c0) (0xc000bdcdc0) Stream removed, broadcasting: 5 Jul 1 12:41:57.028: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 1 12:41:57.029: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:57.029: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:57.060124 6 log.go:172] (0xc0030c48f0) (0xc000d2d2c0) Create stream I0701 12:41:57.060163 6 log.go:172] (0xc0030c48f0) (0xc000d2d2c0) Stream added, broadcasting: 1 I0701 12:41:57.062706 6 log.go:172] (0xc0030c48f0) Reply frame received for 1 I0701 12:41:57.062739 6 log.go:172] (0xc0030c48f0) (0xc00196a460) Create stream I0701 12:41:57.062752 6 log.go:172] (0xc0030c48f0) (0xc00196a460) Stream 
added, broadcasting: 3 I0701 12:41:57.063798 6 log.go:172] (0xc0030c48f0) Reply frame received for 3 I0701 12:41:57.063835 6 log.go:172] (0xc0030c48f0) (0xc000bdcf00) Create stream I0701 12:41:57.063850 6 log.go:172] (0xc0030c48f0) (0xc000bdcf00) Stream added, broadcasting: 5 I0701 12:41:57.064775 6 log.go:172] (0xc0030c48f0) Reply frame received for 5 I0701 12:41:57.112463 6 log.go:172] (0xc0030c48f0) Data frame received for 5 I0701 12:41:57.112490 6 log.go:172] (0xc000bdcf00) (5) Data frame handling I0701 12:41:57.112520 6 log.go:172] (0xc0030c48f0) Data frame received for 3 I0701 12:41:57.112560 6 log.go:172] (0xc00196a460) (3) Data frame handling I0701 12:41:57.112586 6 log.go:172] (0xc00196a460) (3) Data frame sent I0701 12:41:57.112600 6 log.go:172] (0xc0030c48f0) Data frame received for 3 I0701 12:41:57.112611 6 log.go:172] (0xc00196a460) (3) Data frame handling I0701 12:41:57.114146 6 log.go:172] (0xc0030c48f0) Data frame received for 1 I0701 12:41:57.114206 6 log.go:172] (0xc000d2d2c0) (1) Data frame handling I0701 12:41:57.114256 6 log.go:172] (0xc000d2d2c0) (1) Data frame sent I0701 12:41:57.114278 6 log.go:172] (0xc0030c48f0) (0xc000d2d2c0) Stream removed, broadcasting: 1 I0701 12:41:57.114299 6 log.go:172] (0xc0030c48f0) Go away received I0701 12:41:57.114497 6 log.go:172] (0xc0030c48f0) (0xc000d2d2c0) Stream removed, broadcasting: 1 I0701 12:41:57.114526 6 log.go:172] (0xc0030c48f0) (0xc00196a460) Stream removed, broadcasting: 3 I0701 12:41:57.114545 6 log.go:172] (0xc0030c48f0) (0xc000bdcf00) Stream removed, broadcasting: 5 Jul 1 12:41:57.114: INFO: Exec stderr: "" Jul 1 12:41:57.114: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:57.114: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:57.168906 6 log.go:172] (0xc0027b2580) (0xc001c82aa0) Create stream I0701 
12:41:57.168942 6 log.go:172] (0xc0027b2580) (0xc001c82aa0) Stream added, broadcasting: 1 I0701 12:41:57.171876 6 log.go:172] (0xc0027b2580) Reply frame received for 1 I0701 12:41:57.171924 6 log.go:172] (0xc0027b2580) (0xc001c82b40) Create stream I0701 12:41:57.171940 6 log.go:172] (0xc0027b2580) (0xc001c82b40) Stream added, broadcasting: 3 I0701 12:41:57.172929 6 log.go:172] (0xc0027b2580) Reply frame received for 3 I0701 12:41:57.172979 6 log.go:172] (0xc0027b2580) (0xc000bdd040) Create stream I0701 12:41:57.172995 6 log.go:172] (0xc0027b2580) (0xc000bdd040) Stream added, broadcasting: 5 I0701 12:41:57.174502 6 log.go:172] (0xc0027b2580) Reply frame received for 5 I0701 12:41:57.228433 6 log.go:172] (0xc0027b2580) Data frame received for 5 I0701 12:41:57.228469 6 log.go:172] (0xc0027b2580) Data frame received for 3 I0701 12:41:57.228497 6 log.go:172] (0xc001c82b40) (3) Data frame handling I0701 12:41:57.228519 6 log.go:172] (0xc000bdd040) (5) Data frame handling I0701 12:41:57.228543 6 log.go:172] (0xc001c82b40) (3) Data frame sent I0701 12:41:57.228563 6 log.go:172] (0xc0027b2580) Data frame received for 3 I0701 12:41:57.228634 6 log.go:172] (0xc001c82b40) (3) Data frame handling I0701 12:41:57.229787 6 log.go:172] (0xc0027b2580) Data frame received for 1 I0701 12:41:57.229835 6 log.go:172] (0xc001c82aa0) (1) Data frame handling I0701 12:41:57.229849 6 log.go:172] (0xc001c82aa0) (1) Data frame sent I0701 12:41:57.229859 6 log.go:172] (0xc0027b2580) (0xc001c82aa0) Stream removed, broadcasting: 1 I0701 12:41:57.229871 6 log.go:172] (0xc0027b2580) Go away received I0701 12:41:57.229946 6 log.go:172] (0xc0027b2580) (0xc001c82aa0) Stream removed, broadcasting: 1 I0701 12:41:57.229971 6 log.go:172] (0xc0027b2580) (0xc001c82b40) Stream removed, broadcasting: 3 I0701 12:41:57.229984 6 log.go:172] (0xc0027b2580) (0xc000bdd040) Stream removed, broadcasting: 5 Jul 1 12:41:57.229: INFO: Exec stderr: "" Jul 1 12:41:57.230: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:57.230: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:57.270911 6 log.go:172] (0xc0027b2bb0) (0xc001c82dc0) Create stream I0701 12:41:57.270942 6 log.go:172] (0xc0027b2bb0) (0xc001c82dc0) Stream added, broadcasting: 1 I0701 12:41:57.273801 6 log.go:172] (0xc0027b2bb0) Reply frame received for 1 I0701 12:41:57.273844 6 log.go:172] (0xc0027b2bb0) (0xc00196a640) Create stream I0701 12:41:57.273867 6 log.go:172] (0xc0027b2bb0) (0xc00196a640) Stream added, broadcasting: 3 I0701 12:41:57.275042 6 log.go:172] (0xc0027b2bb0) Reply frame received for 3 I0701 12:41:57.275078 6 log.go:172] (0xc0027b2bb0) (0xc000d2d4a0) Create stream I0701 12:41:57.275096 6 log.go:172] (0xc0027b2bb0) (0xc000d2d4a0) Stream added, broadcasting: 5 I0701 12:41:57.276142 6 log.go:172] (0xc0027b2bb0) Reply frame received for 5 I0701 12:41:57.360213 6 log.go:172] (0xc0027b2bb0) Data frame received for 3 I0701 12:41:57.360235 6 log.go:172] (0xc00196a640) (3) Data frame handling I0701 12:41:57.360242 6 log.go:172] (0xc00196a640) (3) Data frame sent I0701 12:41:57.360247 6 log.go:172] (0xc0027b2bb0) Data frame received for 3 I0701 12:41:57.360251 6 log.go:172] (0xc00196a640) (3) Data frame handling I0701 12:41:57.360270 6 log.go:172] (0xc0027b2bb0) Data frame received for 5 I0701 12:41:57.360281 6 log.go:172] (0xc000d2d4a0) (5) Data frame handling I0701 12:41:57.361773 6 log.go:172] (0xc0027b2bb0) Data frame received for 1 I0701 12:41:57.361789 6 log.go:172] (0xc001c82dc0) (1) Data frame handling I0701 12:41:57.361798 6 log.go:172] (0xc001c82dc0) (1) Data frame sent I0701 12:41:57.361811 6 log.go:172] (0xc0027b2bb0) (0xc001c82dc0) Stream removed, broadcasting: 1 I0701 12:41:57.361826 6 log.go:172] (0xc0027b2bb0) Go away received I0701 12:41:57.361911 6 log.go:172] (0xc0027b2bb0) (0xc001c82dc0) Stream removed, broadcasting: 1 
I0701 12:41:57.361927 6 log.go:172] (0xc0027b2bb0) (0xc00196a640) Stream removed, broadcasting: 3 I0701 12:41:57.361937 6 log.go:172] (0xc0027b2bb0) (0xc000d2d4a0) Stream removed, broadcasting: 5 Jul 1 12:41:57.361: INFO: Exec stderr: "" Jul 1 12:41:57.361: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2739 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:41:57.361: INFO: >>> kubeConfig: /root/.kube/config I0701 12:41:57.385336 6 log.go:172] (0xc00351a6e0) (0xc00196a960) Create stream I0701 12:41:57.385367 6 log.go:172] (0xc00351a6e0) (0xc00196a960) Stream added, broadcasting: 1 I0701 12:41:57.387149 6 log.go:172] (0xc00351a6e0) Reply frame received for 1 I0701 12:41:57.387181 6 log.go:172] (0xc00351a6e0) (0xc000f3c960) Create stream I0701 12:41:57.387193 6 log.go:172] (0xc00351a6e0) (0xc000f3c960) Stream added, broadcasting: 3 I0701 12:41:57.387897 6 log.go:172] (0xc00351a6e0) Reply frame received for 3 I0701 12:41:57.387938 6 log.go:172] (0xc00351a6e0) (0xc000f3ca00) Create stream I0701 12:41:57.387948 6 log.go:172] (0xc00351a6e0) (0xc000f3ca00) Stream added, broadcasting: 5 I0701 12:41:57.388714 6 log.go:172] (0xc00351a6e0) Reply frame received for 5 I0701 12:41:57.585806 6 log.go:172] (0xc00351a6e0) Data frame received for 3 I0701 12:41:57.585826 6 log.go:172] (0xc000f3c960) (3) Data frame handling I0701 12:41:57.585838 6 log.go:172] (0xc000f3c960) (3) Data frame sent I0701 12:41:57.585855 6 log.go:172] (0xc00351a6e0) Data frame received for 3 I0701 12:41:57.585862 6 log.go:172] (0xc000f3c960) (3) Data frame handling I0701 12:41:57.585875 6 log.go:172] (0xc00351a6e0) Data frame received for 5 I0701 12:41:57.585882 6 log.go:172] (0xc000f3ca00) (5) Data frame handling I0701 12:41:57.587097 6 log.go:172] (0xc00351a6e0) Data frame received for 1 I0701 12:41:57.587108 6 log.go:172] (0xc00196a960) (1) Data frame handling I0701 
12:41:57.587115 6 log.go:172] (0xc00196a960) (1) Data frame sent
I0701 12:41:57.587121 6 log.go:172] (0xc00351a6e0) (0xc00196a960) Stream removed, broadcasting: 1
I0701 12:41:57.587169 6 log.go:172] (0xc00351a6e0) (0xc00196a960) Stream removed, broadcasting: 1
I0701 12:41:57.587183 6 log.go:172] (0xc00351a6e0) (0xc000f3c960) Stream removed, broadcasting: 3
I0701 12:41:57.587268 6 log.go:172] (0xc00351a6e0) (0xc000f3ca00) Stream removed, broadcasting: 5
I0701 12:41:57.587298 6 log.go:172] (0xc00351a6e0) Go away received
Jul 1 12:41:57.587: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:41:57.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2739" for this suite.
• [SLOW TEST:13.272 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":584,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:41:57.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 1 12:41:57.691: INFO: Waiting up to 5m0s for pod "pod-672af366-8927-416f-b48d-0f138bcd16f4" in namespace "emptydir-449" to be "success or failure"
Jul 1 12:41:57.710: INFO: Pod "pod-672af366-8927-416f-b48d-0f138bcd16f4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.664468ms
Jul 1 12:41:59.743: INFO: Pod "pod-672af366-8927-416f-b48d-0f138bcd16f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051723134s
Jul 1 12:42:01.747: INFO: Pod "pod-672af366-8927-416f-b48d-0f138bcd16f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056081894s
STEP: Saw pod success
Jul 1 12:42:01.747: INFO: Pod "pod-672af366-8927-416f-b48d-0f138bcd16f4" satisfied condition "success or failure"
Jul 1 12:42:01.750: INFO: Trying to get logs from node jerma-worker2 pod pod-672af366-8927-416f-b48d-0f138bcd16f4 container test-container: 
STEP: delete the pod
Jul 1 12:42:01.771: INFO: Waiting for pod pod-672af366-8927-416f-b48d-0f138bcd16f4 to disappear
Jul 1 12:42:01.790: INFO: Pod pod-672af366-8927-416f-b48d-0f138bcd16f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:42:01.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-449" for this suite.
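The (root,0777,tmpfs) case above can be sketched as a plain manifest. This is an assumption-laden illustration, not the test's real pod spec: the real test uses its own test image and mount-checking binary, and it applies mode 0777 from inside the container rather than in the volume spec.

```yaml
# Hypothetical sketch of a tmpfs-backed emptyDir, as exercised by the test.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed; not the image the e2e test uses
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # report the mount's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # "tmpfs" in the test name maps to medium: Memory
```

With `medium: Memory` the kubelet backs the volume with a tmpfs mount, so its contents count against the pod's memory limit and vanish on node restart.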
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":603,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:42:01.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 1 12:42:01.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6221'
Jul 1 12:42:05.085: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 1 12:42:05.085: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631
Jul 1 12:42:07.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6221'
Jul 1 12:42:07.308: INFO: stderr: ""
Jul 1 12:42:07.308: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:42:07.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6221" for this suite.
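The stderr above shows the `--generator=deployment/apps.v1` form of `kubectl run` being deprecated. What that generator produced can be approximated as a plain Deployment manifest; the `run:` selector label follows the generator's convention, but treat the exact labels as an assumption of this sketch:

```yaml
# Hypothetical manifest roughly equivalent to the deprecated
# `kubectl run e2e-test-httpd-deployment --generator=deployment/apps.v1 ...`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment   # label key assumed from the generator's convention
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```

In newer kubectl versions the same result comes from `kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine`, as the deprecation message suggests.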
• [SLOW TEST:5.529 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":40,"skipped":604,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:42:07.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:42:11.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2734" for this suite.
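The read-only-root-filesystem behavior checked by the Kubelet test above maps to a single securityContext field. A minimal illustrative sketch (pod name, image, and command are assumptions, not taken from the test):

```yaml
# Hypothetical pod spec: with readOnlyRootFilesystem set, any write to the
# container's root filesystem fails; only mounted volumes remain writable.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                  # assumed image
    command: ["sh", "-c", "touch /attempt && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true  # the field under test
```

The `touch` is expected to fail, so the container log would report the filesystem as read-only; pods that need scratch space typically pair this setting with a writable `emptyDir` mount.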
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":619,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:11.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-215a3be5-5774-4f1c-8ea4-586fb774236b STEP: Creating a pod to test consume configMaps Jul 1 12:42:11.679: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c488ef8-6c48-45c0-91d9-f419432cc7db" in namespace "configmap-1608" to be "success or failure" Jul 1 12:42:11.692: INFO: Pod "pod-configmaps-8c488ef8-6c48-45c0-91d9-f419432cc7db": Phase="Pending", Reason="", readiness=false. Elapsed: 12.917374ms Jul 1 12:42:13.700: INFO: Pod "pod-configmaps-8c488ef8-6c48-45c0-91d9-f419432cc7db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021056344s Jul 1 12:42:15.706: INFO: Pod "pod-configmaps-8c488ef8-6c48-45c0-91d9-f419432cc7db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026395695s STEP: Saw pod success Jul 1 12:42:15.706: INFO: Pod "pod-configmaps-8c488ef8-6c48-45c0-91d9-f419432cc7db" satisfied condition "success or failure" Jul 1 12:42:15.710: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8c488ef8-6c48-45c0-91d9-f419432cc7db container configmap-volume-test: STEP: delete the pod Jul 1 12:42:15.729: INFO: Waiting for pod pod-configmaps-8c488ef8-6c48-45c0-91d9-f419432cc7db to disappear Jul 1 12:42:15.775: INFO: Pod pod-configmaps-8c488ef8-6c48-45c0-91d9-f419432cc7db no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:15.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1608" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":628,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:15.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:22.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-30" for this suite. • [SLOW TEST:7.143 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":43,"skipped":634,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:22.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 1 12:42:22.994: INFO: Waiting up to 5m0s for pod "downward-api-61af4c26-0a84-412c-b2a0-5ead08e21004" in namespace "downward-api-5337" to be "success or failure" Jul 1 12:42:23.025: INFO: Pod "downward-api-61af4c26-0a84-412c-b2a0-5ead08e21004": Phase="Pending", Reason="", readiness=false. Elapsed: 30.693112ms Jul 1 12:42:25.033: INFO: Pod "downward-api-61af4c26-0a84-412c-b2a0-5ead08e21004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038957293s Jul 1 12:42:27.038: INFO: Pod "downward-api-61af4c26-0a84-412c-b2a0-5ead08e21004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043325672s STEP: Saw pod success Jul 1 12:42:27.038: INFO: Pod "downward-api-61af4c26-0a84-412c-b2a0-5ead08e21004" satisfied condition "success or failure" Jul 1 12:42:27.041: INFO: Trying to get logs from node jerma-worker2 pod downward-api-61af4c26-0a84-412c-b2a0-5ead08e21004 container dapi-container: STEP: delete the pod Jul 1 12:42:27.109: INFO: Waiting for pod downward-api-61af4c26-0a84-412c-b2a0-5ead08e21004 to disappear Jul 1 12:42:27.150: INFO: Pod downward-api-61af4c26-0a84-412c-b2a0-5ead08e21004 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:27.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5337" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:27.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:27.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1721" for this suite. 
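The pod-wait lines above carry Go-style durations in their `Elapsed:` field (`30.693112ms`, `4.043325672s`, …). A sketch of converting those to seconds for timing analysis; it deliberately handles only the plain `ms`/`s` forms seen in this log, not compound Go durations like `1m30s`:

```python
import re

# Matches the "Elapsed: <value><unit>" field of the pod-wait log lines.
_DUR = re.compile(r"Elapsed: (\d+(?:\.\d+)?)(ms|s)\b")


def elapsed_seconds(line: str) -> float:
    """Return the Elapsed duration of a pod-wait log line, in seconds."""
    value, unit = _DUR.search(line).groups()
    return float(value) / (1000.0 if unit == "ms" else 1.0)


waits = [
    "Elapsed: 30.693112ms",
    "Elapsed: 2.038957293s",
    "Elapsed: 4.043325672s",
]
# Successive polls land roughly two seconds apart.
deltas = [elapsed_seconds(b) - elapsed_seconds(a) for a, b in zip(waits, waits[1:])]
```

The ~2 s gaps between consecutive `Pending` checks reflect the framework's fixed poll interval rather than anything about the pod itself.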
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":45,"skipped":666,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:27.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 12:42:27.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fce44b0-51ea-4c45-afca-5db7d3872ad3" in namespace "downward-api-746" to be "success or failure" Jul 1 12:42:27.560: INFO: Pod "downwardapi-volume-0fce44b0-51ea-4c45-afca-5db7d3872ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 51.487618ms Jul 1 12:42:29.584: INFO: Pod "downwardapi-volume-0fce44b0-51ea-4c45-afca-5db7d3872ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075574562s Jul 1 12:42:31.591: INFO: Pod "downwardapi-volume-0fce44b0-51ea-4c45-afca-5db7d3872ad3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.08175626s STEP: Saw pod success Jul 1 12:42:31.591: INFO: Pod "downwardapi-volume-0fce44b0-51ea-4c45-afca-5db7d3872ad3" satisfied condition "success or failure" Jul 1 12:42:31.594: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0fce44b0-51ea-4c45-afca-5db7d3872ad3 container client-container: STEP: delete the pod Jul 1 12:42:31.610: INFO: Waiting for pod downwardapi-volume-0fce44b0-51ea-4c45-afca-5db7d3872ad3 to disappear Jul 1 12:42:31.614: INFO: Pod downwardapi-volume-0fce44b0-51ea-4c45-afca-5db7d3872ad3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:31.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-746" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":683,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:31.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API 
volume plugin Jul 1 12:42:31.693: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7" in namespace "downward-api-7788" to be "success or failure" Jul 1 12:42:31.733: INFO: Pod "downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7": Phase="Pending", Reason="", readiness=false. Elapsed: 39.288342ms Jul 1 12:42:33.738: INFO: Pod "downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044737439s Jul 1 12:42:35.779: INFO: Pod "downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085721446s Jul 1 12:42:37.783: INFO: Pod "downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090151977s STEP: Saw pod success Jul 1 12:42:37.783: INFO: Pod "downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7" satisfied condition "success or failure" Jul 1 12:42:37.787: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7 container client-container: STEP: delete the pod Jul 1 12:42:37.827: INFO: Waiting for pod downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7 to disappear Jul 1 12:42:37.830: INFO: Pod downwardapi-volume-fd644d1e-56f1-45dc-9c98-7d3a22b5fcb7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:37.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7788" for this suite. 
• [SLOW TEST:6.215 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":696,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:37.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-a453f8bc-ab66-488a-954e-f12b09c872ad STEP: Creating a pod to test consume secrets Jul 1 12:42:37.952: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b7fe74f-51bb-4681-895a-183f9324405e" in namespace "projected-964" to be "success or failure" Jul 1 12:42:37.962: INFO: Pod "pod-projected-secrets-7b7fe74f-51bb-4681-895a-183f9324405e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.345383ms Jul 1 12:42:39.966: INFO: Pod "pod-projected-secrets-7b7fe74f-51bb-4681-895a-183f9324405e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014593448s Jul 1 12:42:41.970: INFO: Pod "pod-projected-secrets-7b7fe74f-51bb-4681-895a-183f9324405e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018250598s STEP: Saw pod success Jul 1 12:42:41.970: INFO: Pod "pod-projected-secrets-7b7fe74f-51bb-4681-895a-183f9324405e" satisfied condition "success or failure" Jul 1 12:42:41.973: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7b7fe74f-51bb-4681-895a-183f9324405e container projected-secret-volume-test: STEP: delete the pod Jul 1 12:42:42.025: INFO: Waiting for pod pod-projected-secrets-7b7fe74f-51bb-4681-895a-183f9324405e to disappear Jul 1 12:42:42.029: INFO: Pod pod-projected-secrets-7b7fe74f-51bb-4681-895a-183f9324405e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:42.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-964" for this suite. 
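Nearly every spec above waits on its test pod the same way: poll the pod phase until it reaches the terminal "success or failure" condition, bounded by a 5m timeout. A minimal sketch of that wait loop, where `get_phase` is a hypothetical stand-in for reading `pod.status.phase` from the API server:

```python
import itertools


def wait_for_terminal_phase(get_phase, poll_limit=150):
    """Poll a phase-returning callable until the pod reaches a terminal
    phase, mirroring the 'success or failure' condition in the log.
    Raises TimeoutError after poll_limit attempts (the log's 5m bound,
    at a ~2s poll interval)."""
    for attempt in itertools.count():
        if attempt >= poll_limit:
            raise TimeoutError("pod never reached a terminal phase")
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase


# Simulate the Pending -> Pending -> Running -> Succeeded sequence
# seen in the log lines above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
```

Note that `Running` with `readiness=true` (as in the fsGroup secret test below) still loops: only `Succeeded` or `Failed` satisfies the condition.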
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:42.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 12:42:42.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9832cd1a-bd20-42c2-b2eb-00c725ba3211" in namespace "projected-3523" to be "success or failure" Jul 1 12:42:42.130: INFO: Pod "downwardapi-volume-9832cd1a-bd20-42c2-b2eb-00c725ba3211": Phase="Pending", Reason="", readiness=false. Elapsed: 16.382341ms Jul 1 12:42:44.133: INFO: Pod "downwardapi-volume-9832cd1a-bd20-42c2-b2eb-00c725ba3211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020034875s Jul 1 12:42:46.138: INFO: Pod "downwardapi-volume-9832cd1a-bd20-42c2-b2eb-00c725ba3211": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024349325s STEP: Saw pod success Jul 1 12:42:46.138: INFO: Pod "downwardapi-volume-9832cd1a-bd20-42c2-b2eb-00c725ba3211" satisfied condition "success or failure" Jul 1 12:42:46.141: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9832cd1a-bd20-42c2-b2eb-00c725ba3211 container client-container: STEP: delete the pod Jul 1 12:42:46.231: INFO: Waiting for pod downwardapi-volume-9832cd1a-bd20-42c2-b2eb-00c725ba3211 to disappear Jul 1 12:42:46.253: INFO: Pod downwardapi-volume-9832cd1a-bd20-42c2-b2eb-00c725ba3211 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:46.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3523" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:46.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name 
secret-test-f39d243a-0500-406b-a41a-341b7c3719b1 STEP: Creating a pod to test consume secrets Jul 1 12:42:46.443: INFO: Waiting up to 5m0s for pod "pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963" in namespace "secrets-172" to be "success or failure" Jul 1 12:42:46.446: INFO: Pod "pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963": Phase="Pending", Reason="", readiness=false. Elapsed: 3.134875ms Jul 1 12:42:48.451: INFO: Pod "pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007317199s Jul 1 12:42:50.455: INFO: Pod "pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963": Phase="Running", Reason="", readiness=true. Elapsed: 4.011674947s Jul 1 12:42:52.458: INFO: Pod "pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01512016s STEP: Saw pod success Jul 1 12:42:52.458: INFO: Pod "pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963" satisfied condition "success or failure" Jul 1 12:42:52.461: INFO: Trying to get logs from node jerma-worker pod pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963 container secret-volume-test: STEP: delete the pod Jul 1 12:42:52.521: INFO: Waiting for pod pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963 to disappear Jul 1 12:42:52.526: INFO: Pod pod-secrets-6c513216-58e4-427a-bdc2-20785e6f2963 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:52.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-172" for this suite. 
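The DefaultMode and fsGroup specs above set file permission bits on projected files. One detail worth knowing when reproducing them: JSON has no octal literals, so a manifest's `defaultMode` field takes a decimal integer, and the conventional octal `0644` is written as `420`. A one-line conversion helper:

```python
def octal_to_manifest_mode(octal_str: str) -> int:
    """Convert a conventional octal mode string ("644") into the decimal
    integer a JSON/YAML manifest's defaultMode field expects."""
    return int(octal_str, 8)


# 0o644 (rw-r--r--) is written as 420 in a manifest;
# 0o400 (r--------) is written as 256.
print(octal_to_manifest_mode("644"))  # 420
```

YAML manifests accept the `0644` spelling directly, but logs and API responses echo the decimal form.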
• [SLOW TEST:6.273 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":799,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:52.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:42:52.684: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e072e9db-cc6f-4d16-9c3e-2ac4a872f979", Controller:(*bool)(0xc0056c20b2), BlockOwnerDeletion:(*bool)(0xc0056c20b3)}} Jul 1 12:42:52.694: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d37a18cb-f9aa-4415-9b07-02fe94cda146", Controller:(*bool)(0xc0026f3e9a), BlockOwnerDeletion:(*bool)(0xc0026f3e9b)}} Jul 1 12:42:52.719: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ea8dafcd-51bc-4654-93d5-484334efcd0f", Controller:(*bool)(0xc000ceb442), BlockOwnerDeletion:(*bool)(0xc000ceb443)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:42:57.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8889" for this suite. • [SLOW TEST:5.206 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":51,"skipped":805,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:42:57.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jul 1 12:42:57.810: INFO: Waiting up to 5m0s for pod 
"client-containers-18d48c1d-e28a-4092-8b02-6ab05b1cf4dc" in namespace "containers-6653" to be "success or failure" Jul 1 12:42:57.863: INFO: Pod "client-containers-18d48c1d-e28a-4092-8b02-6ab05b1cf4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 53.565295ms Jul 1 12:42:59.870: INFO: Pod "client-containers-18d48c1d-e28a-4092-8b02-6ab05b1cf4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060574787s Jul 1 12:43:01.874: INFO: Pod "client-containers-18d48c1d-e28a-4092-8b02-6ab05b1cf4dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064439964s STEP: Saw pod success Jul 1 12:43:01.874: INFO: Pod "client-containers-18d48c1d-e28a-4092-8b02-6ab05b1cf4dc" satisfied condition "success or failure" Jul 1 12:43:01.877: INFO: Trying to get logs from node jerma-worker2 pod client-containers-18d48c1d-e28a-4092-8b02-6ab05b1cf4dc container test-container: STEP: delete the pod Jul 1 12:43:02.036: INFO: Waiting for pod client-containers-18d48c1d-e28a-4092-8b02-6ab05b1cf4dc to disappear Jul 1 12:43:02.151: INFO: Pod client-containers-18d48c1d-e28a-4092-8b02-6ab05b1cf4dc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:43:02.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6653" for this suite. 
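The garbage collector spec earlier builds a deliberate ownerReference circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and checks that collection is not blocked by it. A sketch of detecting such a cycle, modelling the graph as a simple name-to-owner map (a simplification: real ownerReferences are lists keyed by UID):

```python
def has_owner_cycle(owners: dict) -> bool:
    """Detect a cycle in an ownerReference graph, where owners maps an
    object name to the name of its single owner, as in the garbage
    collector spec: pod1 -> pod3 -> pod2 -> pod1."""
    for start in owners:
        seen = {start}
        node = start
        while node in owners:
            node = owners[node]
            if node in seen:
                return True
            seen.add(node)
    return False


# The dependency circle from the log lines above.
cycle = has_owner_cycle({"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"})
```

The spec's point is that Kubernetes does not need this check to terminate: the collector deletes the cyclic group anyway instead of deadlocking on `blockOwnerDeletion`.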
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":811,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:43:02.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:43:02.246: INFO: Create a RollingUpdate DaemonSet Jul 1 12:43:02.249: INFO: Check that daemon pods launch on every node of the cluster Jul 1 12:43:02.277: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:43:02.293: INFO: Number of nodes with available pods: 0 Jul 1 12:43:02.293: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:43:03.300: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:43:03.310: INFO: Number of nodes with available pods: 0 Jul 1 12:43:03.310: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:43:04.299: INFO: DaemonSet pods can't 
tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:43:04.303: INFO: Number of nodes with available pods: 0 Jul 1 12:43:04.303: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:43:05.299: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:43:05.303: INFO: Number of nodes with available pods: 1 Jul 1 12:43:05.303: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:43:06.299: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:43:06.304: INFO: Number of nodes with available pods: 2 Jul 1 12:43:06.304: INFO: Number of running nodes: 2, number of available pods: 2 Jul 1 12:43:06.304: INFO: Update the DaemonSet to trigger a rollout Jul 1 12:43:06.311: INFO: Updating DaemonSet daemon-set Jul 1 12:43:19.388: INFO: Roll back the DaemonSet before rollout is complete Jul 1 12:43:19.494: INFO: Updating DaemonSet daemon-set Jul 1 12:43:19.494: INFO: Make sure DaemonSet rollback is complete Jul 1 12:43:19.515: INFO: Wrong image for pod: daemon-set-82947. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 1 12:43:19.515: INFO: Pod daemon-set-82947 is not available Jul 1 12:43:19.546: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:43:20.551: INFO: Wrong image for pod: daemon-set-82947. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jul 1 12:43:20.551: INFO: Pod daemon-set-82947 is not available Jul 1 12:43:20.556: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:43:21.833: INFO: Wrong image for pod: daemon-set-82947. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 1 12:43:21.833: INFO: Pod daemon-set-82947 is not available Jul 1 12:43:21.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:43:22.552: INFO: Pod daemon-set-zqthz is not available Jul 1 12:43:22.556: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2376, will wait for the garbage collector to delete the pods Jul 1 12:43:22.624: INFO: Deleting DaemonSet.extensions daemon-set took: 7.844984ms Jul 1 12:43:23.024: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.300696ms Jul 1 12:43:26.127: INFO: Number of nodes with available pods: 0 Jul 1 12:43:26.128: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 12:43:26.131: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2376/daemonsets","resourceVersion":"28775925"},"items":null} Jul 1 12:43:26.134: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2376/pods","resourceVersion":"28775925"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:43:26.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2376" for this suite. • [SLOW TEST:24.016 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":53,"skipped":845,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:43:26.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 12:43:26.242: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75c3e48a-a14e-43d9-8f08-ee567090b331" in namespace "projected-5996" to be "success or failure" Jul 1 12:43:26.245: INFO: Pod 
"downwardapi-volume-75c3e48a-a14e-43d9-8f08-ee567090b331": Phase="Pending", Reason="", readiness=false. Elapsed: 2.838929ms Jul 1 12:43:28.398: INFO: Pod "downwardapi-volume-75c3e48a-a14e-43d9-8f08-ee567090b331": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155138853s Jul 1 12:43:30.403: INFO: Pod "downwardapi-volume-75c3e48a-a14e-43d9-8f08-ee567090b331": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160116123s STEP: Saw pod success Jul 1 12:43:30.403: INFO: Pod "downwardapi-volume-75c3e48a-a14e-43d9-8f08-ee567090b331" satisfied condition "success or failure" Jul 1 12:43:30.406: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-75c3e48a-a14e-43d9-8f08-ee567090b331 container client-container: STEP: delete the pod Jul 1 12:43:30.449: INFO: Waiting for pod downwardapi-volume-75c3e48a-a14e-43d9-8f08-ee567090b331 to disappear Jul 1 12:43:30.455: INFO: Pod downwardapi-volume-75c3e48a-a14e-43d9-8f08-ee567090b331 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:43:30.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5996" for this suite. 
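[editor's note] The downward API volume plugin exercised above projects the container's CPU request into a file the container can read. A minimal sketch of an equivalent pod, assuming hypothetical names (`podinfo`, the mount path, the 250m request) that are not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container               # container name mirrors the log's "client-container"
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                        # the value the test expects to read back
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu     # exposed in whole cores by default
```

The test's "Trying to get logs from node ... container client-container" step corresponds to reading this file's contents from the pod logs after the pod reaches `Succeeded`.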
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":862,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:43:30.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:43:30.530: INFO: Creating deployment "test-recreate-deployment" Jul 1 12:43:30.533: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jul 1 12:43:30.562: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jul 1 12:43:32.639: INFO: Waiting deployment "test-recreate-deployment" to complete Jul 1 12:43:32.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204210, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204210, loc:(*time.Location)(0x78ee080)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204210, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204210, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 12:43:34.662: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 1 12:43:34.690: INFO: Updating deployment test-recreate-deployment Jul 1 12:43:34.690: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 1 12:43:36.549: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1003 /apis/apps/v1/namespaces/deployment-1003/deployments/test-recreate-deployment 10e74f65-f419-4d0a-beab-e281186f316d 28776038 2 2020-07-01 12:43:30 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0021621f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-01 12:43:36 +0000 UTC,LastTransitionTime:2020-07-01 12:43:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-07-01 12:43:36 +0000 UTC,LastTransitionTime:2020-07-01 12:43:30 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jul 1 12:43:36.761: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1003 /apis/apps/v1/namespaces/deployment-1003/replicasets/test-recreate-deployment-5f94c574ff 2b34c0e3-53c9-4d55-92b4-88452e3a5bc2 28776037 1 2020-07-01 12:43:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 10e74f65-f419-4d0a-beab-e281186f316d 0xc00222b237 0xc00222b238}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00222b298 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 12:43:36.761: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 1 12:43:36.762: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1003 /apis/apps/v1/namespaces/deployment-1003/replicasets/test-recreate-deployment-799c574856 a6dbe355-eafc-41a8-bed5-0a7d28400b46 28776025 2 2020-07-01 12:43:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 10e74f65-f419-4d0a-beab-e281186f316d 0xc00222b307 0xc00222b308}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00222b378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 12:43:36.766: INFO: Pod "test-recreate-deployment-5f94c574ff-ch97k" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-ch97k test-recreate-deployment-5f94c574ff- deployment-1003 /api/v1/namespaces/deployment-1003/pods/test-recreate-deployment-5f94c574ff-ch97k ca25f8b2-cd67-423e-bc10-d92f761374d5 28776040 0 2020-07-01 12:43:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 2b34c0e3-53c9-4d55-92b4-88452e3a5bc2 0xc00222b7b7 0xc00222b7b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf2z8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf2z8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf2z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:43:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:43:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:43:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-01 12:43:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:43:36.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1003" for this suite. • [SLOW TEST:6.363 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":55,"skipped":917,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:43:36.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 1 12:43:37.582: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 1 12:43:39.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204217, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204217, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204217, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204217, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 12:43:42.639: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:43:42.642: INFO: >>> kubeConfig: /root/.kube/config 
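[editor's note] The conversion test above creates CRs in two versions and lists them through a conversion webhook. A minimal sketch of the CRD shape involved, assuming a hypothetical group and kind (`stable.example.com`/`Example`); the service name and namespace are taken from the log, and a real `clientConfig` would also need a `caBundle`:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com      # hypothetical CRD name
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true                        # v1 is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false                       # v2 objects are converted on read/write
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook                    # route cross-version reads through the webhook pod
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: crd-webhook-2824    # namespace from the log
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert              # hypothetical path
```

Listing a mixed set of v1 and v2 CRs at either version, as the "List CRs in v1"/"List CRs in v2" steps do, forces the apiserver to call the webhook for every object not stored at the requested version.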
STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:43:44.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2824" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.439 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":56,"skipped":926,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:43:44.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7010 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7010 Jul 1 12:43:44.395: INFO: Found 0 stateful pods, waiting for 1 Jul 1 12:43:54.401: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 1 12:43:54.426: INFO: Deleting all statefulset in ns statefulset-7010 Jul 1 12:43:54.461: INFO: Scaling statefulset ss to 0 Jul 1 12:44:24.518: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 12:44:24.521: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:44:24.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7010" for this suite. 
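[editor's note] The "getting scale subresource" / "updating a scale subresource" steps above go through the StatefulSet's `/scale` endpoint rather than updating the StatefulSet object itself. A sketch of the `Scale` payload involved, using the `ss` name and `statefulset-7010` namespace from the log; the replica count 2 is a hypothetical target:

```yaml
# GET  /apis/apps/v1/namespaces/statefulset-7010/statefulsets/ss/scale
# PUT  the same path with an updated spec.replicas
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: ss
  namespace: statefulset-7010
spec:
  replicas: 2        # hypothetical new replica count written via the subresource
```

`kubectl scale statefulset ss --replicas=2` exercises the same subresource; the test then verifies that the change is reflected in the StatefulSet's own `spec.replicas`, which is what "verifying the statefulset Spec.Replicas was modified" refers to.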
• [SLOW TEST:40.277 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":57,"skipped":929,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:44:24.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 12:44:25.495: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 12:44:27.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204265, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204266, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204265, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 12:44:29.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204265, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204266, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204265, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 12:44:32.558: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should 
work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:44:32.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2997" for this suite. STEP: Destroying namespace "webhook-2997-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.296 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":58,"skipped":936,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client Jul 1 12:44:32.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Jul 1 12:44:33.311: INFO: Waiting up to 5m0s for pod "client-containers-83aad22c-1f06-44b1-94c5-da2df7891007" in namespace "containers-5596" to be "success or failure" Jul 1 12:44:33.469: INFO: Pod "client-containers-83aad22c-1f06-44b1-94c5-da2df7891007": Phase="Pending", Reason="", readiness=false. Elapsed: 158.250833ms Jul 1 12:44:35.473: INFO: Pod "client-containers-83aad22c-1f06-44b1-94c5-da2df7891007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162397697s Jul 1 12:44:38.716: INFO: Pod "client-containers-83aad22c-1f06-44b1-94c5-da2df7891007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.40544214s Jul 1 12:44:40.963: INFO: Pod "client-containers-83aad22c-1f06-44b1-94c5-da2df7891007": Phase="Running", Reason="", readiness=true. Elapsed: 7.652032575s Jul 1 12:44:42.966: INFO: Pod "client-containers-83aad22c-1f06-44b1-94c5-da2df7891007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.655450679s STEP: Saw pod success Jul 1 12:44:42.966: INFO: Pod "client-containers-83aad22c-1f06-44b1-94c5-da2df7891007" satisfied condition "success or failure" Jul 1 12:44:42.978: INFO: Trying to get logs from node jerma-worker pod client-containers-83aad22c-1f06-44b1-94c5-da2df7891007 container test-container: STEP: delete the pod Jul 1 12:44:43.016: INFO: Waiting for pod client-containers-83aad22c-1f06-44b1-94c5-da2df7891007 to disappear Jul 1 12:44:43.032: INFO: Pod client-containers-83aad22c-1f06-44b1-94c5-da2df7891007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:44:43.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5596" for this suite. • [SLOW TEST:10.198 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:44:43.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 12:44:43.590: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 12:44:45.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204283, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204283, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204284, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204283, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 12:44:47.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204283, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204283, 
loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204284, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204283, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 12:44:50.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jul 1 12:44:54.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2150 to-be-attached-pod -i -c=container1' Jul 1 12:44:54.970: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:44:54.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2150" for this suite. STEP: Destroying namespace "webhook-2150-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.054 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":60,"skipped":965,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:44:55.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8471 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 12:44:55.165: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 1 12:45:23.433: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.205:8080/hostName | grep -v 
'^\s*$'] Namespace:pod-network-test-8471 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:45:23.433: INFO: >>> kubeConfig: /root/.kube/config I0701 12:45:23.464707 6 log.go:172] (0xc0020ab6b0) (0xc00091d860) Create stream I0701 12:45:23.464747 6 log.go:172] (0xc0020ab6b0) (0xc00091d860) Stream added, broadcasting: 1 I0701 12:45:23.466550 6 log.go:172] (0xc0020ab6b0) Reply frame received for 1 I0701 12:45:23.466592 6 log.go:172] (0xc0020ab6b0) (0xc0020b4000) Create stream I0701 12:45:23.466603 6 log.go:172] (0xc0020ab6b0) (0xc0020b4000) Stream added, broadcasting: 3 I0701 12:45:23.467291 6 log.go:172] (0xc0020ab6b0) Reply frame received for 3 I0701 12:45:23.467322 6 log.go:172] (0xc0020ab6b0) (0xc00091d9a0) Create stream I0701 12:45:23.467331 6 log.go:172] (0xc0020ab6b0) (0xc00091d9a0) Stream added, broadcasting: 5 I0701 12:45:23.468134 6 log.go:172] (0xc0020ab6b0) Reply frame received for 5 I0701 12:45:23.683787 6 log.go:172] (0xc0020ab6b0) Data frame received for 5 I0701 12:45:23.683832 6 log.go:172] (0xc00091d9a0) (5) Data frame handling I0701 12:45:23.683863 6 log.go:172] (0xc0020ab6b0) Data frame received for 3 I0701 12:45:23.683892 6 log.go:172] (0xc0020b4000) (3) Data frame handling I0701 12:45:23.683917 6 log.go:172] (0xc0020b4000) (3) Data frame sent I0701 12:45:23.683929 6 log.go:172] (0xc0020ab6b0) Data frame received for 3 I0701 12:45:23.683940 6 log.go:172] (0xc0020b4000) (3) Data frame handling I0701 12:45:23.686332 6 log.go:172] (0xc0020ab6b0) Data frame received for 1 I0701 12:45:23.686359 6 log.go:172] (0xc00091d860) (1) Data frame handling I0701 12:45:23.686376 6 log.go:172] (0xc00091d860) (1) Data frame sent I0701 12:45:23.686391 6 log.go:172] (0xc0020ab6b0) (0xc00091d860) Stream removed, broadcasting: 1 I0701 12:45:23.686406 6 log.go:172] (0xc0020ab6b0) Go away received I0701 12:45:23.686577 6 log.go:172] (0xc0020ab6b0) (0xc00091d860) Stream removed, 
broadcasting: 1 I0701 12:45:23.686607 6 log.go:172] (0xc0020ab6b0) (0xc0020b4000) Stream removed, broadcasting: 3 I0701 12:45:23.686620 6 log.go:172] (0xc0020ab6b0) (0xc00091d9a0) Stream removed, broadcasting: 5 Jul 1 12:45:23.686: INFO: Found all expected endpoints: [netserver-0] Jul 1 12:45:23.698: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.238:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8471 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 12:45:23.698: INFO: >>> kubeConfig: /root/.kube/config I0701 12:45:23.736388 6 log.go:172] (0xc0033184d0) (0xc000e332c0) Create stream I0701 12:45:23.736415 6 log.go:172] (0xc0033184d0) (0xc000e332c0) Stream added, broadcasting: 1 I0701 12:45:23.737993 6 log.go:172] (0xc0033184d0) Reply frame received for 1 I0701 12:45:23.738019 6 log.go:172] (0xc0033184d0) (0xc0027d0640) Create stream I0701 12:45:23.738029 6 log.go:172] (0xc0033184d0) (0xc0027d0640) Stream added, broadcasting: 3 I0701 12:45:23.738569 6 log.go:172] (0xc0033184d0) Reply frame received for 3 I0701 12:45:23.738592 6 log.go:172] (0xc0033184d0) (0xc0020b40a0) Create stream I0701 12:45:23.738600 6 log.go:172] (0xc0033184d0) (0xc0020b40a0) Stream added, broadcasting: 5 I0701 12:45:23.739298 6 log.go:172] (0xc0033184d0) Reply frame received for 5 I0701 12:45:23.799019 6 log.go:172] (0xc0033184d0) Data frame received for 3 I0701 12:45:23.799059 6 log.go:172] (0xc0027d0640) (3) Data frame handling I0701 12:45:23.799087 6 log.go:172] (0xc0027d0640) (3) Data frame sent I0701 12:45:23.799107 6 log.go:172] (0xc0033184d0) Data frame received for 3 I0701 12:45:23.799126 6 log.go:172] (0xc0027d0640) (3) Data frame handling I0701 12:45:23.799295 6 log.go:172] (0xc0033184d0) Data frame received for 5 I0701 12:45:23.799312 6 log.go:172] (0xc0020b40a0) (5) Data frame handling I0701 12:45:23.800771 6 log.go:172] 
(0xc0033184d0) Data frame received for 1 I0701 12:45:23.800805 6 log.go:172] (0xc000e332c0) (1) Data frame handling I0701 12:45:23.800832 6 log.go:172] (0xc000e332c0) (1) Data frame sent I0701 12:45:23.800846 6 log.go:172] (0xc0033184d0) (0xc000e332c0) Stream removed, broadcasting: 1 I0701 12:45:23.800878 6 log.go:172] (0xc0033184d0) Go away received I0701 12:45:23.801037 6 log.go:172] (0xc0033184d0) (0xc000e332c0) Stream removed, broadcasting: 1 I0701 12:45:23.801092 6 log.go:172] (0xc0033184d0) (0xc0027d0640) Stream removed, broadcasting: 3 I0701 12:45:23.801331 6 log.go:172] (0xc0033184d0) (0xc0020b40a0) Stream removed, broadcasting: 5 Jul 1 12:45:23.801: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:45:23.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8471" for this suite. • [SLOW TEST:28.718 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":979,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:45:23.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-cz6m STEP: Creating a pod to test atomic-volume-subpath Jul 1 12:45:23.886: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cz6m" in namespace "subpath-9603" to be "success or failure" Jul 1 12:45:23.889: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Pending", Reason="", readiness=false. Elapsed: 3.473476ms Jul 1 12:45:26.075: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189535206s Jul 1 12:45:28.079: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 4.193173843s Jul 1 12:45:30.087: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 6.201434471s Jul 1 12:45:32.231: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 8.345704657s Jul 1 12:45:34.235: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 10.349009083s Jul 1 12:45:36.238: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 12.352395696s Jul 1 12:45:38.243: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.357450875s Jul 1 12:45:40.247: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 16.361856405s Jul 1 12:45:42.252: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 18.366426019s Jul 1 12:45:44.257: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 20.370979844s Jul 1 12:45:46.273: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 22.387281032s Jul 1 12:45:48.277: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Running", Reason="", readiness=true. Elapsed: 24.391835845s Jul 1 12:45:50.282: INFO: Pod "pod-subpath-test-secret-cz6m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.396765487s STEP: Saw pod success Jul 1 12:45:50.282: INFO: Pod "pod-subpath-test-secret-cz6m" satisfied condition "success or failure" Jul 1 12:45:50.286: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-cz6m container test-container-subpath-secret-cz6m: STEP: delete the pod Jul 1 12:45:50.311: INFO: Waiting for pod pod-subpath-test-secret-cz6m to disappear Jul 1 12:45:50.327: INFO: Pod pod-subpath-test-secret-cz6m no longer exists STEP: Deleting pod pod-subpath-test-secret-cz6m Jul 1 12:45:50.327: INFO: Deleting pod "pod-subpath-test-secret-cz6m" in namespace "subpath-9603" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:45:50.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9603" for this suite. 
• [SLOW TEST:26.525 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":62,"skipped":979,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:45:50.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4204/secret-test-0d19b4f4-a25d-4c53-89db-49dc1dc1f4ca STEP: Creating a pod to test consume secrets Jul 1 12:45:50.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-294125a0-01e7-46c4-b4f2-c173745ecd1c" in namespace "secrets-4204" to be "success or failure" Jul 1 12:45:50.773: INFO: Pod "pod-configmaps-294125a0-01e7-46c4-b4f2-c173745ecd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.025496ms Jul 1 12:45:52.788: INFO: Pod "pod-configmaps-294125a0-01e7-46c4-b4f2-c173745ecd1c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034813579s Jul 1 12:45:54.793: INFO: Pod "pod-configmaps-294125a0-01e7-46c4-b4f2-c173745ecd1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039554183s STEP: Saw pod success Jul 1 12:45:54.793: INFO: Pod "pod-configmaps-294125a0-01e7-46c4-b4f2-c173745ecd1c" satisfied condition "success or failure" Jul 1 12:45:54.796: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-294125a0-01e7-46c4-b4f2-c173745ecd1c container env-test: STEP: delete the pod Jul 1 12:45:54.836: INFO: Waiting for pod pod-configmaps-294125a0-01e7-46c4-b4f2-c173745ecd1c to disappear Jul 1 12:45:54.842: INFO: Pod pod-configmaps-294125a0-01e7-46c4-b4f2-c173745ecd1c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:45:54.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4204" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":991,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:45:54.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jul 1 12:45:54.909: INFO: namespace kubectl-5381 Jul 1 12:45:54.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5381' Jul 1 12:45:55.276: INFO: stderr: "" Jul 1 12:45:55.276: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jul 1 12:45:56.333: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 12:45:56.333: INFO: Found 0 / 1 Jul 1 12:45:57.351: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 12:45:57.351: INFO: Found 0 / 1 Jul 1 12:45:58.280: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 12:45:58.280: INFO: Found 1 / 1 Jul 1 12:45:58.280: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 1 12:45:58.282: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 12:45:58.282: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 1 12:45:58.282: INFO: wait on agnhost-master startup in kubectl-5381 Jul 1 12:45:58.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-8d89j agnhost-master --namespace=kubectl-5381' Jul 1 12:45:58.407: INFO: stderr: "" Jul 1 12:45:58.407: INFO: stdout: "Paused\n" STEP: exposing RC Jul 1 12:45:58.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5381' Jul 1 12:45:58.628: INFO: stderr: "" Jul 1 12:45:58.628: INFO: stdout: "service/rm2 exposed\n" Jul 1 12:45:58.658: INFO: Service rm2 in namespace kubectl-5381 found. 
STEP: exposing service Jul 1 12:46:00.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5381' Jul 1 12:46:01.263: INFO: stderr: "" Jul 1 12:46:01.263: INFO: stdout: "service/rm3 exposed\n" Jul 1 12:46:01.430: INFO: Service rm3 in namespace kubectl-5381 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:46:03.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5381" for this suite. • [SLOW TEST:8.600 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":64,"skipped":1001,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:46:03.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:46:03.522: INFO: >>> 
kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8316 I0701 12:46:03.545641 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8316, replica count: 1 I0701 12:46:04.596037 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 12:46:05.596243 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 12:46:06.596492 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 12:46:06.736: INFO: Created: latency-svc-lkrfp Jul 1 12:46:06.749: INFO: Got endpoints: latency-svc-lkrfp [53.028537ms] Jul 1 12:46:06.843: INFO: Created: latency-svc-7s8kl Jul 1 12:46:06.856: INFO: Got endpoints: latency-svc-7s8kl [106.809956ms] Jul 1 12:46:06.880: INFO: Created: latency-svc-4kbwm Jul 1 12:46:06.893: INFO: Got endpoints: latency-svc-4kbwm [143.328503ms] Jul 1 12:46:06.920: INFO: Created: latency-svc-5t6b5 Jul 1 12:46:06.979: INFO: Got endpoints: latency-svc-5t6b5 [229.800488ms] Jul 1 12:46:06.981: INFO: Created: latency-svc-k9jgf Jul 1 12:46:06.987: INFO: Got endpoints: latency-svc-k9jgf [238.070946ms] Jul 1 12:46:07.041: INFO: Created: latency-svc-7ctqh Jul 1 12:46:07.154: INFO: Got endpoints: latency-svc-7ctqh [403.802863ms] Jul 1 12:46:07.155: INFO: Created: latency-svc-fxbtg Jul 1 12:46:07.169: INFO: Got endpoints: latency-svc-fxbtg [419.100136ms] Jul 1 12:46:07.238: INFO: Created: latency-svc-b62tc Jul 1 12:46:07.247: INFO: Got endpoints: latency-svc-b62tc [496.934497ms] Jul 1 12:46:07.297: INFO: Created: latency-svc-wpxtn Jul 1 12:46:07.308: INFO: Got endpoints: latency-svc-wpxtn [557.820584ms] Jul 1 12:46:07.347: INFO: Created: latency-svc-jnsrk Jul 1 12:46:07.362: INFO: 
Got endpoints: latency-svc-jnsrk [612.751862ms] Jul 1 12:46:07.395: INFO: Created: latency-svc-m2gr6 Jul 1 12:46:07.465: INFO: Got endpoints: latency-svc-m2gr6 [714.881841ms] Jul 1 12:46:07.484: INFO: Created: latency-svc-lqw2x Jul 1 12:46:07.500: INFO: Got endpoints: latency-svc-lqw2x [749.399958ms] Jul 1 12:46:07.521: INFO: Created: latency-svc-9bvrb Jul 1 12:46:07.536: INFO: Got endpoints: latency-svc-9bvrb [785.398302ms] Jul 1 12:46:07.608: INFO: Created: latency-svc-slq2q Jul 1 12:46:07.688: INFO: Got endpoints: latency-svc-slq2q [937.562959ms] Jul 1 12:46:07.693: INFO: Created: latency-svc-45hk7 Jul 1 12:46:07.758: INFO: Got endpoints: latency-svc-45hk7 [1.007580935s] Jul 1 12:46:07.790: INFO: Created: latency-svc-w6l8c Jul 1 12:46:07.809: INFO: Got endpoints: latency-svc-w6l8c [1.059765823s] Jul 1 12:46:07.845: INFO: Created: latency-svc-ltm57 Jul 1 12:46:07.902: INFO: Got endpoints: latency-svc-ltm57 [1.045650796s] Jul 1 12:46:07.978: INFO: Created: latency-svc-xfp5g Jul 1 12:46:07.979: INFO: Got endpoints: latency-svc-xfp5g [1.085606397s] Jul 1 12:46:08.067: INFO: Created: latency-svc-fvkhz Jul 1 12:46:08.083: INFO: Got endpoints: latency-svc-fvkhz [1.103292756s] Jul 1 12:46:08.134: INFO: Created: latency-svc-j4j28 Jul 1 12:46:08.237: INFO: Got endpoints: latency-svc-j4j28 [1.249474527s] Jul 1 12:46:08.238: INFO: Created: latency-svc-zz9c7 Jul 1 12:46:08.251: INFO: Got endpoints: latency-svc-zz9c7 [1.096870132s] Jul 1 12:46:08.277: INFO: Created: latency-svc-7lg4s Jul 1 12:46:08.293: INFO: Got endpoints: latency-svc-7lg4s [1.12428138s] Jul 1 12:46:08.383: INFO: Created: latency-svc-z8f9t Jul 1 12:46:08.432: INFO: Got endpoints: latency-svc-z8f9t [1.184662918s] Jul 1 12:46:08.433: INFO: Created: latency-svc-g7pvl Jul 1 12:46:08.444: INFO: Got endpoints: latency-svc-g7pvl [1.136777485s] Jul 1 12:46:08.463: INFO: Created: latency-svc-jdnsz Jul 1 12:46:08.475: INFO: Got endpoints: latency-svc-jdnsz [1.112899102s] Jul 1 12:46:08.518: INFO: Created: 
latency-svc-x4qfm Jul 1 12:46:08.529: INFO: Got endpoints: latency-svc-x4qfm [1.063918418s] Jul 1 12:46:08.554: INFO: Created: latency-svc-pk6jv Jul 1 12:46:08.595: INFO: Got endpoints: latency-svc-pk6jv [1.095627537s] Jul 1 12:46:08.711: INFO: Created: latency-svc-vcmd6 Jul 1 12:46:08.714: INFO: Got endpoints: latency-svc-vcmd6 [1.178418239s] Jul 1 12:46:08.770: INFO: Created: latency-svc-4p6x4 Jul 1 12:46:08.794: INFO: Got endpoints: latency-svc-4p6x4 [1.105496566s] Jul 1 12:46:08.866: INFO: Created: latency-svc-6qgxg Jul 1 12:46:08.868: INFO: Got endpoints: latency-svc-6qgxg [1.110676237s] Jul 1 12:46:08.925: INFO: Created: latency-svc-t8gzh Jul 1 12:46:08.955: INFO: Got endpoints: latency-svc-t8gzh [1.14551547s] Jul 1 12:46:09.010: INFO: Created: latency-svc-7mt74 Jul 1 12:46:09.015: INFO: Got endpoints: latency-svc-7mt74 [1.112954715s] Jul 1 12:46:09.033: INFO: Created: latency-svc-c2lzm Jul 1 12:46:09.068: INFO: Got endpoints: latency-svc-c2lzm [1.089908482s] Jul 1 12:46:09.157: INFO: Created: latency-svc-rp8fm Jul 1 12:46:09.206: INFO: Got endpoints: latency-svc-rp8fm [1.123259019s] Jul 1 12:46:09.207: INFO: Created: latency-svc-v2zrn Jul 1 12:46:09.226: INFO: Got endpoints: latency-svc-v2zrn [989.424041ms] Jul 1 12:46:09.249: INFO: Created: latency-svc-jxfmn Jul 1 12:46:09.315: INFO: Got endpoints: latency-svc-jxfmn [1.064382601s] Jul 1 12:46:09.351: INFO: Created: latency-svc-hzhqr Jul 1 12:46:09.365: INFO: Got endpoints: latency-svc-hzhqr [1.071867426s] Jul 1 12:46:09.393: INFO: Created: latency-svc-c89xc Jul 1 12:46:09.407: INFO: Got endpoints: latency-svc-c89xc [974.763629ms] Jul 1 12:46:09.468: INFO: Created: latency-svc-j4wgp Jul 1 12:46:09.470: INFO: Got endpoints: latency-svc-j4wgp [1.025097269s] Jul 1 12:46:09.507: INFO: Created: latency-svc-62zlw Jul 1 12:46:09.520: INFO: Got endpoints: latency-svc-62zlw [1.044990251s] Jul 1 12:46:09.555: INFO: Created: latency-svc-4frmx Jul 1 12:46:09.602: INFO: Got endpoints: latency-svc-4frmx [1.073213104s] Jul 
1 12:46:09.668: INFO: Created: latency-svc-qc89n Jul 1 12:46:09.683: INFO: Got endpoints: latency-svc-qc89n [1.087427212s] Jul 1 12:46:09.729: INFO: Created: latency-svc-mm2m5 Jul 1 12:46:09.752: INFO: Created: latency-svc-bzfmx Jul 1 12:46:09.753: INFO: Got endpoints: latency-svc-mm2m5 [1.038785441s] Jul 1 12:46:09.767: INFO: Got endpoints: latency-svc-bzfmx [973.721819ms] Jul 1 12:46:09.819: INFO: Created: latency-svc-7m8xt Jul 1 12:46:09.884: INFO: Got endpoints: latency-svc-7m8xt [1.015043842s] Jul 1 12:46:09.887: INFO: Created: latency-svc-rklhn Jul 1 12:46:09.907: INFO: Got endpoints: latency-svc-rklhn [952.623843ms] Jul 1 12:46:09.944: INFO: Created: latency-svc-bb569 Jul 1 12:46:09.960: INFO: Got endpoints: latency-svc-bb569 [945.079276ms] Jul 1 12:46:10.057: INFO: Created: latency-svc-rs77r Jul 1 12:46:10.068: INFO: Got endpoints: latency-svc-rs77r [999.844482ms] Jul 1 12:46:10.089: INFO: Created: latency-svc-4zg5d Jul 1 12:46:10.098: INFO: Got endpoints: latency-svc-4zg5d [892.103848ms] Jul 1 12:46:10.125: INFO: Created: latency-svc-pm66j Jul 1 12:46:10.219: INFO: Got endpoints: latency-svc-pm66j [992.431741ms] Jul 1 12:46:10.221: INFO: Created: latency-svc-trj6m Jul 1 12:46:10.483: INFO: Got endpoints: latency-svc-trj6m [1.167701874s] Jul 1 12:46:10.723: INFO: Created: latency-svc-tc5hk Jul 1 12:46:10.759: INFO: Got endpoints: latency-svc-tc5hk [1.393534922s] Jul 1 12:46:10.814: INFO: Created: latency-svc-l96sv Jul 1 12:46:10.872: INFO: Got endpoints: latency-svc-l96sv [1.464907873s] Jul 1 12:46:10.905: INFO: Created: latency-svc-lb4qz Jul 1 12:46:10.921: INFO: Got endpoints: latency-svc-lb4qz [1.451251489s] Jul 1 12:46:10.940: INFO: Created: latency-svc-wtczw Jul 1 12:46:10.957: INFO: Got endpoints: latency-svc-wtczw [1.437171277s] Jul 1 12:46:11.179: INFO: Created: latency-svc-w68nh Jul 1 12:46:11.357: INFO: Got endpoints: latency-svc-w68nh [1.754369098s] Jul 1 12:46:11.366: INFO: Created: latency-svc-jptmq Jul 1 12:46:11.371: INFO: Got endpoints: 
latency-svc-jptmq [1.688621607s] Jul 1 12:46:11.408: INFO: Created: latency-svc-zkrs4 Jul 1 12:46:11.426: INFO: Got endpoints: latency-svc-zkrs4 [1.672946147s] Jul 1 12:46:11.426: INFO: Created: latency-svc-4l7ml Jul 1 12:46:11.444: INFO: Got endpoints: latency-svc-4l7ml [1.676206515s] Jul 1 12:46:11.501: INFO: Created: latency-svc-28qdz Jul 1 12:46:11.503: INFO: Got endpoints: latency-svc-28qdz [1.61973734s] Jul 1 12:46:11.547: INFO: Created: latency-svc-4dpjl Jul 1 12:46:11.564: INFO: Got endpoints: latency-svc-4dpjl [1.656568684s] Jul 1 12:46:11.683: INFO: Created: latency-svc-hsmxx Jul 1 12:46:11.697: INFO: Got endpoints: latency-svc-hsmxx [1.737100334s] Jul 1 12:46:11.739: INFO: Created: latency-svc-lb785 Jul 1 12:46:11.751: INFO: Got endpoints: latency-svc-lb785 [1.682517774s] Jul 1 12:46:11.775: INFO: Created: latency-svc-lhs6v Jul 1 12:46:11.842: INFO: Got endpoints: latency-svc-lhs6v [1.743541538s] Jul 1 12:46:11.844: INFO: Created: latency-svc-jwhjl Jul 1 12:46:11.860: INFO: Got endpoints: latency-svc-jwhjl [1.64047927s] Jul 1 12:46:11.883: INFO: Created: latency-svc-9p7j2 Jul 1 12:46:11.896: INFO: Got endpoints: latency-svc-9p7j2 [1.413192814s] Jul 1 12:46:11.919: INFO: Created: latency-svc-fngrk Jul 1 12:46:11.932: INFO: Got endpoints: latency-svc-fngrk [1.173457138s] Jul 1 12:46:11.986: INFO: Created: latency-svc-vfhd5 Jul 1 12:46:12.009: INFO: Got endpoints: latency-svc-vfhd5 [1.137247907s] Jul 1 12:46:12.009: INFO: Created: latency-svc-xzvgw Jul 1 12:46:12.023: INFO: Got endpoints: latency-svc-xzvgw [1.101782699s] Jul 1 12:46:12.057: INFO: Created: latency-svc-qzm5k Jul 1 12:46:12.183: INFO: Got endpoints: latency-svc-qzm5k [1.225803693s] Jul 1 12:46:12.186: INFO: Created: latency-svc-bsmqq Jul 1 12:46:12.221: INFO: Got endpoints: latency-svc-bsmqq [864.775192ms] Jul 1 12:46:12.261: INFO: Created: latency-svc-fmb2b Jul 1 12:46:12.339: INFO: Got endpoints: latency-svc-fmb2b [967.401199ms] Jul 1 12:46:12.341: INFO: Created: latency-svc-nblxg Jul 1 
12:46:12.347: INFO: Got endpoints: latency-svc-nblxg [921.161241ms] Jul 1 12:46:12.369: INFO: Created: latency-svc-bcf7k Jul 1 12:46:12.377: INFO: Got endpoints: latency-svc-bcf7k [933.622742ms] Jul 1 12:46:12.398: INFO: Created: latency-svc-v6szx Jul 1 12:46:12.408: INFO: Got endpoints: latency-svc-v6szx [904.490186ms] Jul 1 12:46:12.428: INFO: Created: latency-svc-p9qrk Jul 1 12:46:12.488: INFO: Got endpoints: latency-svc-p9qrk [923.983314ms] Jul 1 12:46:12.524: INFO: Created: latency-svc-srrnx Jul 1 12:46:12.541: INFO: Got endpoints: latency-svc-srrnx [843.676822ms] Jul 1 12:46:12.680: INFO: Created: latency-svc-7vbqd Jul 1 12:46:12.683: INFO: Got endpoints: latency-svc-7vbqd [932.385504ms] Jul 1 12:46:12.728: INFO: Created: latency-svc-lz94b Jul 1 12:46:12.748: INFO: Got endpoints: latency-svc-lz94b [905.699883ms] Jul 1 12:46:12.764: INFO: Created: latency-svc-phsmj Jul 1 12:46:12.854: INFO: Got endpoints: latency-svc-phsmj [993.994642ms] Jul 1 12:46:12.896: INFO: Created: latency-svc-rcdfc Jul 1 12:46:12.910: INFO: Got endpoints: latency-svc-rcdfc [1.013646908s] Jul 1 12:46:12.938: INFO: Created: latency-svc-rlg59 Jul 1 12:46:12.952: INFO: Got endpoints: latency-svc-rlg59 [1.019460566s] Jul 1 12:46:13.010: INFO: Created: latency-svc-9wrdl Jul 1 12:46:13.018: INFO: Got endpoints: latency-svc-9wrdl [1.008837215s] Jul 1 12:46:13.053: INFO: Created: latency-svc-9n6h8 Jul 1 12:46:13.067: INFO: Got endpoints: latency-svc-9n6h8 [1.043891772s] Jul 1 12:46:13.101: INFO: Created: latency-svc-7xkmw Jul 1 12:46:13.183: INFO: Got endpoints: latency-svc-7xkmw [999.925156ms] Jul 1 12:46:13.187: INFO: Created: latency-svc-bm75r Jul 1 12:46:13.211: INFO: Got endpoints: latency-svc-bm75r [989.544694ms] Jul 1 12:46:13.232: INFO: Created: latency-svc-fgg2j Jul 1 12:46:13.257: INFO: Got endpoints: latency-svc-fgg2j [918.038177ms] Jul 1 12:46:13.351: INFO: Created: latency-svc-fngzq Jul 1 12:46:13.354: INFO: Got endpoints: latency-svc-fngzq [1.006659909s] Jul 1 12:46:13.389: INFO: 
Created: latency-svc-z5lhv Jul 1 12:46:13.404: INFO: Got endpoints: latency-svc-z5lhv [1.026422644s] Jul 1 12:46:13.436: INFO: Created: latency-svc-gvqsg Jul 1 12:46:13.507: INFO: Got endpoints: latency-svc-gvqsg [1.099404242s] Jul 1 12:46:13.670: INFO: Created: latency-svc-dx2j9 Jul 1 12:46:13.686: INFO: Got endpoints: latency-svc-dx2j9 [1.197780528s] Jul 1 12:46:13.724: INFO: Created: latency-svc-lfb2t Jul 1 12:46:13.741: INFO: Got endpoints: latency-svc-lfb2t [1.199923788s] Jul 1 12:46:13.807: INFO: Created: latency-svc-lkt25 Jul 1 12:46:13.813: INFO: Got endpoints: latency-svc-lkt25 [1.129456252s] Jul 1 12:46:13.838: INFO: Created: latency-svc-6vgxd Jul 1 12:46:13.855: INFO: Got endpoints: latency-svc-6vgxd [1.106928106s] Jul 1 12:46:13.881: INFO: Created: latency-svc-f7hb6 Jul 1 12:46:13.897: INFO: Got endpoints: latency-svc-f7hb6 [1.043484803s] Jul 1 12:46:13.944: INFO: Created: latency-svc-26svq Jul 1 12:46:13.947: INFO: Got endpoints: latency-svc-26svq [1.037521614s] Jul 1 12:46:13.975: INFO: Created: latency-svc-c6q68 Jul 1 12:46:13.988: INFO: Got endpoints: latency-svc-c6q68 [1.035845817s] Jul 1 12:46:14.013: INFO: Created: latency-svc-8rb7x Jul 1 12:46:14.024: INFO: Got endpoints: latency-svc-8rb7x [1.00591706s] Jul 1 12:46:14.081: INFO: Created: latency-svc-t74rb Jul 1 12:46:14.103: INFO: Got endpoints: latency-svc-t74rb [114.868147ms] Jul 1 12:46:14.133: INFO: Created: latency-svc-qbk98 Jul 1 12:46:14.145: INFO: Got endpoints: latency-svc-qbk98 [1.078274315s] Jul 1 12:46:14.226: INFO: Created: latency-svc-fj7tz Jul 1 12:46:14.228: INFO: Got endpoints: latency-svc-fj7tz [1.044808246s] Jul 1 12:46:14.252: INFO: Created: latency-svc-q9l4t Jul 1 12:46:14.265: INFO: Got endpoints: latency-svc-q9l4t [1.054101196s] Jul 1 12:46:14.288: INFO: Created: latency-svc-h7l4m Jul 1 12:46:14.302: INFO: Got endpoints: latency-svc-h7l4m [1.044555297s] Jul 1 12:46:14.369: INFO: Created: latency-svc-79dr6 Jul 1 12:46:14.396: INFO: Got endpoints: latency-svc-79dr6 
[1.042189384s] Jul 1 12:46:14.396: INFO: Created: latency-svc-ddv49 Jul 1 12:46:14.420: INFO: Got endpoints: latency-svc-ddv49 [1.016067615s] Jul 1 12:46:14.450: INFO: Created: latency-svc-kqntn Jul 1 12:46:14.464: INFO: Got endpoints: latency-svc-kqntn [956.433851ms] Jul 1 12:46:14.519: INFO: Created: latency-svc-d2bcn Jul 1 12:46:14.524: INFO: Got endpoints: latency-svc-d2bcn [837.856624ms] Jul 1 12:46:14.545: INFO: Created: latency-svc-r2sk9 Jul 1 12:46:14.588: INFO: Got endpoints: latency-svc-r2sk9 [846.877237ms] Jul 1 12:46:14.662: INFO: Created: latency-svc-nb697 Jul 1 12:46:14.669: INFO: Got endpoints: latency-svc-nb697 [855.868423ms] Jul 1 12:46:14.720: INFO: Created: latency-svc-ldtt5 Jul 1 12:46:14.735: INFO: Got endpoints: latency-svc-ldtt5 [880.515164ms] Jul 1 12:46:14.755: INFO: Created: latency-svc-8bjcw Jul 1 12:46:14.812: INFO: Got endpoints: latency-svc-8bjcw [914.545082ms] Jul 1 12:46:14.814: INFO: Created: latency-svc-jmrtn Jul 1 12:46:14.826: INFO: Got endpoints: latency-svc-jmrtn [878.456262ms] Jul 1 12:46:14.889: INFO: Created: latency-svc-v9kmz Jul 1 12:46:14.911: INFO: Got endpoints: latency-svc-v9kmz [886.560488ms] Jul 1 12:46:14.956: INFO: Created: latency-svc-9cwjd Jul 1 12:46:14.964: INFO: Got endpoints: latency-svc-9cwjd [861.729ms] Jul 1 12:46:15.007: INFO: Created: latency-svc-lwbzb Jul 1 12:46:15.024: INFO: Got endpoints: latency-svc-lwbzb [879.426989ms] Jul 1 12:46:15.111: INFO: Created: latency-svc-6fwg2 Jul 1 12:46:15.114: INFO: Got endpoints: latency-svc-6fwg2 [885.975837ms] Jul 1 12:46:15.164: INFO: Created: latency-svc-5lb6l Jul 1 12:46:15.181: INFO: Got endpoints: latency-svc-5lb6l [915.776359ms] Jul 1 12:46:15.262: INFO: Created: latency-svc-75lct Jul 1 12:46:15.289: INFO: Got endpoints: latency-svc-75lct [987.39038ms] Jul 1 12:46:15.291: INFO: Created: latency-svc-q6d7h Jul 1 12:46:15.301: INFO: Got endpoints: latency-svc-q6d7h [905.453548ms] Jul 1 12:46:15.332: INFO: Created: latency-svc-zjq5q Jul 1 12:46:15.344: INFO: Got 
endpoints: latency-svc-zjq5q [923.93743ms] Jul 1 12:46:15.399: INFO: Created: latency-svc-bxp2f Jul 1 12:46:15.402: INFO: Got endpoints: latency-svc-bxp2f [938.223453ms] Jul 1 12:46:15.428: INFO: Created: latency-svc-dhth5 Jul 1 12:46:15.440: INFO: Got endpoints: latency-svc-dhth5 [916.267744ms] Jul 1 12:46:15.469: INFO: Created: latency-svc-fgxv2 Jul 1 12:46:15.555: INFO: Got endpoints: latency-svc-fgxv2 [966.712472ms] Jul 1 12:46:15.556: INFO: Created: latency-svc-cqbr7 Jul 1 12:46:15.567: INFO: Got endpoints: latency-svc-cqbr7 [898.09499ms] Jul 1 12:46:15.626: INFO: Created: latency-svc-qskl5 Jul 1 12:46:15.639: INFO: Got endpoints: latency-svc-qskl5 [904.06047ms] Jul 1 12:46:15.704: INFO: Created: latency-svc-7m4vx Jul 1 12:46:15.712: INFO: Got endpoints: latency-svc-7m4vx [899.89215ms] Jul 1 12:46:15.733: INFO: Created: latency-svc-856tn Jul 1 12:46:15.742: INFO: Got endpoints: latency-svc-856tn [916.229045ms] Jul 1 12:46:15.775: INFO: Created: latency-svc-mjjwx Jul 1 12:46:15.790: INFO: Got endpoints: latency-svc-mjjwx [879.76945ms] Jul 1 12:46:15.848: INFO: Created: latency-svc-hgh4f Jul 1 12:46:15.872: INFO: Got endpoints: latency-svc-hgh4f [907.077052ms] Jul 1 12:46:15.901: INFO: Created: latency-svc-9sm7t Jul 1 12:46:15.932: INFO: Got endpoints: latency-svc-9sm7t [907.61914ms] Jul 1 12:46:15.992: INFO: Created: latency-svc-f2mfx Jul 1 12:46:16.021: INFO: Got endpoints: latency-svc-f2mfx [907.146091ms] Jul 1 12:46:16.023: INFO: Created: latency-svc-9brlr Jul 1 12:46:16.046: INFO: Got endpoints: latency-svc-9brlr [864.690583ms] Jul 1 12:46:16.075: INFO: Created: latency-svc-qltdr Jul 1 12:46:16.088: INFO: Got endpoints: latency-svc-qltdr [799.200424ms] Jul 1 12:46:16.136: INFO: Created: latency-svc-s8vgd Jul 1 12:46:16.138: INFO: Got endpoints: latency-svc-s8vgd [836.166372ms] Jul 1 12:46:16.178: INFO: Created: latency-svc-4wjss Jul 1 12:46:16.197: INFO: Got endpoints: latency-svc-4wjss [852.583339ms] Jul 1 12:46:16.282: INFO: Created: latency-svc-tcr52 Jul 
1 12:46:16.284: INFO: Got endpoints: latency-svc-tcr52 [882.061107ms] Jul 1 12:46:16.322: INFO: Created: latency-svc-nqthk Jul 1 12:46:16.335: INFO: Got endpoints: latency-svc-nqthk [895.181525ms] Jul 1 12:46:16.358: INFO: Created: latency-svc-ngpb6 Jul 1 12:46:16.372: INFO: Got endpoints: latency-svc-ngpb6 [817.262578ms] Jul 1 12:46:16.418: INFO: Created: latency-svc-g4cdv Jul 1 12:46:16.420: INFO: Got endpoints: latency-svc-g4cdv [853.186754ms] Jul 1 12:46:16.447: INFO: Created: latency-svc-m5mfh Jul 1 12:46:16.462: INFO: Got endpoints: latency-svc-m5mfh [823.092455ms] Jul 1 12:46:16.495: INFO: Created: latency-svc-6lv8s Jul 1 12:46:16.511: INFO: Got endpoints: latency-svc-6lv8s [799.004097ms] Jul 1 12:46:16.566: INFO: Created: latency-svc-78nhn Jul 1 12:46:16.598: INFO: Got endpoints: latency-svc-78nhn [855.457325ms] Jul 1 12:46:16.657: INFO: Created: latency-svc-4w9vv Jul 1 12:46:16.711: INFO: Got endpoints: latency-svc-4w9vv [920.204431ms] Jul 1 12:46:16.716: INFO: Created: latency-svc-cb56j Jul 1 12:46:16.752: INFO: Got endpoints: latency-svc-cb56j [879.996949ms] Jul 1 12:46:16.793: INFO: Created: latency-svc-bzw6l Jul 1 12:46:16.866: INFO: Got endpoints: latency-svc-bzw6l [934.119772ms] Jul 1 12:46:16.868: INFO: Created: latency-svc-dbz82 Jul 1 12:46:16.878: INFO: Got endpoints: latency-svc-dbz82 [856.133429ms] Jul 1 12:46:16.934: INFO: Created: latency-svc-whn7c Jul 1 12:46:16.962: INFO: Got endpoints: latency-svc-whn7c [915.791383ms] Jul 1 12:46:17.015: INFO: Created: latency-svc-m85r9 Jul 1 12:46:17.036: INFO: Got endpoints: latency-svc-m85r9 [947.220513ms] Jul 1 12:46:17.083: INFO: Created: latency-svc-xfr7x Jul 1 12:46:17.112: INFO: Got endpoints: latency-svc-xfr7x [974.353351ms] Jul 1 12:46:17.162: INFO: Created: latency-svc-bgvj6 Jul 1 12:46:17.181: INFO: Got endpoints: latency-svc-bgvj6 [984.256243ms] Jul 1 12:46:17.221: INFO: Created: latency-svc-x9m45 Jul 1 12:46:17.239: INFO: Got endpoints: latency-svc-x9m45 [954.298728ms] Jul 1 12:46:17.279: 
INFO: Created: latency-svc-tb7f7 Jul 1 12:46:17.282: INFO: Got endpoints: latency-svc-tb7f7 [946.346452ms] Jul 1 12:46:17.317: INFO: Created: latency-svc-kjtw4 Jul 1 12:46:17.329: INFO: Got endpoints: latency-svc-kjtw4 [957.217769ms] Jul 1 12:46:17.367: INFO: Created: latency-svc-qzbcm Jul 1 12:46:17.377: INFO: Got endpoints: latency-svc-qzbcm [956.912971ms] Jul 1 12:46:17.423: INFO: Created: latency-svc-q7b4h Jul 1 12:46:17.426: INFO: Got endpoints: latency-svc-q7b4h [963.324055ms] Jul 1 12:46:17.456: INFO: Created: latency-svc-tbpth Jul 1 12:46:17.474: INFO: Got endpoints: latency-svc-tbpth [963.622602ms] Jul 1 12:46:17.503: INFO: Created: latency-svc-ccjv7 Jul 1 12:46:17.603: INFO: Got endpoints: latency-svc-ccjv7 [1.005116301s] Jul 1 12:46:17.636: INFO: Created: latency-svc-48zxx Jul 1 12:46:17.648: INFO: Got endpoints: latency-svc-48zxx [937.687964ms] Jul 1 12:46:17.695: INFO: Created: latency-svc-d6rtl Jul 1 12:46:17.782: INFO: Got endpoints: latency-svc-d6rtl [1.030181834s] Jul 1 12:46:17.803: INFO: Created: latency-svc-8ctw6 Jul 1 12:46:17.926: INFO: Got endpoints: latency-svc-8ctw6 [1.059274897s] Jul 1 12:46:17.930: INFO: Created: latency-svc-27t6q Jul 1 12:46:17.938: INFO: Got endpoints: latency-svc-27t6q [1.060291602s] Jul 1 12:46:17.965: INFO: Created: latency-svc-l4z8r Jul 1 12:46:18.016: INFO: Got endpoints: latency-svc-l4z8r [1.054393354s] Jul 1 12:46:18.055: INFO: Created: latency-svc-td49p Jul 1 12:46:18.085: INFO: Got endpoints: latency-svc-td49p [1.049503406s] Jul 1 12:46:18.116: INFO: Created: latency-svc-zcn68 Jul 1 12:46:18.129: INFO: Got endpoints: latency-svc-zcn68 [1.017346896s] Jul 1 12:46:18.190: INFO: Created: latency-svc-p28xr Jul 1 12:46:18.260: INFO: Got endpoints: latency-svc-p28xr [1.079207393s] Jul 1 12:46:18.261: INFO: Created: latency-svc-mlxhz Jul 1 12:46:18.284: INFO: Got endpoints: latency-svc-mlxhz [1.045458461s] Jul 1 12:46:18.412: INFO: Created: latency-svc-vlkgs Jul 1 12:46:18.414: INFO: Got endpoints: latency-svc-vlkgs 
[1.131841594s] Jul 1 12:46:18.439: INFO: Created: latency-svc-bhfwm Jul 1 12:46:18.455: INFO: Got endpoints: latency-svc-bhfwm [1.125411663s] Jul 1 12:46:18.475: INFO: Created: latency-svc-fnkmz Jul 1 12:46:18.491: INFO: Got endpoints: latency-svc-fnkmz [1.113357696s] Jul 1 12:46:18.549: INFO: Created: latency-svc-ptpdq Jul 1 12:46:18.552: INFO: Got endpoints: latency-svc-ptpdq [1.126373365s] Jul 1 12:46:18.578: INFO: Created: latency-svc-k99zt Jul 1 12:46:18.594: INFO: Got endpoints: latency-svc-k99zt [1.119544981s] Jul 1 12:46:18.626: INFO: Created: latency-svc-57rfk Jul 1 12:46:18.636: INFO: Got endpoints: latency-svc-57rfk [1.033148387s] Jul 1 12:46:18.674: INFO: Created: latency-svc-v9rh7 Jul 1 12:46:18.690: INFO: Got endpoints: latency-svc-v9rh7 [1.041624918s] Jul 1 12:46:18.721: INFO: Created: latency-svc-fqjpc Jul 1 12:46:18.738: INFO: Got endpoints: latency-svc-fqjpc [956.462071ms] Jul 1 12:46:18.757: INFO: Created: latency-svc-fr8x7 Jul 1 12:46:18.806: INFO: Got endpoints: latency-svc-fr8x7 [880.039842ms] Jul 1 12:46:18.811: INFO: Created: latency-svc-r2nv5 Jul 1 12:46:18.841: INFO: Got endpoints: latency-svc-r2nv5 [902.821325ms] Jul 1 12:46:18.877: INFO: Created: latency-svc-hqnf6 Jul 1 12:46:18.944: INFO: Got endpoints: latency-svc-hqnf6 [927.476919ms] Jul 1 12:46:18.949: INFO: Created: latency-svc-jjknd Jul 1 12:46:18.955: INFO: Got endpoints: latency-svc-jjknd [869.614749ms] Jul 1 12:46:18.980: INFO: Created: latency-svc-tswsz Jul 1 12:46:18.989: INFO: Got endpoints: latency-svc-tswsz [859.470904ms] Jul 1 12:46:19.010: INFO: Created: latency-svc-cvcv5 Jul 1 12:46:19.019: INFO: Got endpoints: latency-svc-cvcv5 [758.588473ms] Jul 1 12:46:19.089: INFO: Created: latency-svc-lspqs Jul 1 12:46:19.094: INFO: Got endpoints: latency-svc-lspqs [810.174398ms] Jul 1 12:46:19.147: INFO: Created: latency-svc-4hljd Jul 1 12:46:19.168: INFO: Got endpoints: latency-svc-4hljd [754.362865ms] Jul 1 12:46:19.183: INFO: Created: latency-svc-kpprn Jul 1 12:46:19.249: INFO: 
Got endpoints: latency-svc-kpprn [794.205278ms] Jul 1 12:46:19.274: INFO: Created: latency-svc-hn9qx Jul 1 12:46:19.290: INFO: Got endpoints: latency-svc-hn9qx [799.788235ms] Jul 1 12:46:19.321: INFO: Created: latency-svc-prkv5 Jul 1 12:46:19.332: INFO: Got endpoints: latency-svc-prkv5 [779.751318ms] Jul 1 12:46:19.399: INFO: Created: latency-svc-jh76b Jul 1 12:46:19.402: INFO: Got endpoints: latency-svc-jh76b [807.942963ms] Jul 1 12:46:19.436: INFO: Created: latency-svc-glb2n Jul 1 12:46:19.453: INFO: Got endpoints: latency-svc-glb2n [817.286627ms] Jul 1 12:46:19.478: INFO: Created: latency-svc-mjqrr Jul 1 12:46:19.495: INFO: Got endpoints: latency-svc-mjqrr [805.245657ms] Jul 1 12:46:19.543: INFO: Created: latency-svc-z6mqg Jul 1 12:46:19.550: INFO: Got endpoints: latency-svc-z6mqg [811.889988ms] Jul 1 12:46:19.566: INFO: Created: latency-svc-fwpr4 Jul 1 12:46:19.580: INFO: Got endpoints: latency-svc-fwpr4 [774.056549ms] Jul 1 12:46:19.603: INFO: Created: latency-svc-w46tm Jul 1 12:46:19.623: INFO: Got endpoints: latency-svc-w46tm [781.885278ms] Jul 1 12:46:19.692: INFO: Created: latency-svc-mfw5h Jul 1 12:46:19.701: INFO: Got endpoints: latency-svc-mfw5h [757.11569ms] Jul 1 12:46:19.718: INFO: Created: latency-svc-nxv89 Jul 1 12:46:19.731: INFO: Got endpoints: latency-svc-nxv89 [776.080065ms] Jul 1 12:46:19.760: INFO: Created: latency-svc-s5tg7 Jul 1 12:46:19.773: INFO: Got endpoints: latency-svc-s5tg7 [784.02645ms] Jul 1 12:46:19.790: INFO: Created: latency-svc-tsffr Jul 1 12:46:19.859: INFO: Got endpoints: latency-svc-tsffr [840.522429ms] Jul 1 12:46:19.861: INFO: Created: latency-svc-s6qw2 Jul 1 12:46:19.876: INFO: Got endpoints: latency-svc-s6qw2 [781.441273ms] Jul 1 12:46:19.915: INFO: Created: latency-svc-sh6hq Jul 1 12:46:19.924: INFO: Got endpoints: latency-svc-sh6hq [755.666469ms] Jul 1 12:46:19.999: INFO: Created: latency-svc-sz686 Jul 1 12:46:20.002: INFO: Got endpoints: latency-svc-sz686 [752.727122ms] Jul 1 12:46:20.030: INFO: Created: 
latency-svc-5xcjk Jul 1 12:46:20.047: INFO: Got endpoints: latency-svc-5xcjk [756.37851ms] Jul 1 12:46:20.070: INFO: Created: latency-svc-ksjf8 Jul 1 12:46:20.087: INFO: Got endpoints: latency-svc-ksjf8 [754.453273ms] Jul 1 12:46:20.155: INFO: Created: latency-svc-jgvdq Jul 1 12:46:20.165: INFO: Got endpoints: latency-svc-jgvdq [763.104467ms] Jul 1 12:46:20.165: INFO: Latencies: [106.809956ms 114.868147ms 143.328503ms 229.800488ms 238.070946ms 403.802863ms 419.100136ms 496.934497ms 557.820584ms 612.751862ms 714.881841ms 749.399958ms 752.727122ms 754.362865ms 754.453273ms 755.666469ms 756.37851ms 757.11569ms 758.588473ms 763.104467ms 774.056549ms 776.080065ms 779.751318ms 781.441273ms 781.885278ms 784.02645ms 785.398302ms 794.205278ms 799.004097ms 799.200424ms 799.788235ms 805.245657ms 807.942963ms 810.174398ms 811.889988ms 817.262578ms 817.286627ms 823.092455ms 836.166372ms 837.856624ms 840.522429ms 843.676822ms 846.877237ms 852.583339ms 853.186754ms 855.457325ms 855.868423ms 856.133429ms 859.470904ms 861.729ms 864.690583ms 864.775192ms 869.614749ms 878.456262ms 879.426989ms 879.76945ms 879.996949ms 880.039842ms 880.515164ms 882.061107ms 885.975837ms 886.560488ms 892.103848ms 895.181525ms 898.09499ms 899.89215ms 902.821325ms 904.06047ms 904.490186ms 905.453548ms 905.699883ms 907.077052ms 907.146091ms 907.61914ms 914.545082ms 915.776359ms 915.791383ms 916.229045ms 916.267744ms 918.038177ms 920.204431ms 921.161241ms 923.93743ms 923.983314ms 927.476919ms 932.385504ms 933.622742ms 934.119772ms 937.562959ms 937.687964ms 938.223453ms 945.079276ms 946.346452ms 947.220513ms 952.623843ms 954.298728ms 956.433851ms 956.462071ms 956.912971ms 957.217769ms 963.324055ms 963.622602ms 966.712472ms 967.401199ms 973.721819ms 974.353351ms 974.763629ms 984.256243ms 987.39038ms 989.424041ms 989.544694ms 992.431741ms 993.994642ms 999.844482ms 999.925156ms 1.005116301s 1.00591706s 1.006659909s 1.007580935s 1.008837215s 1.013646908s 1.015043842s 1.016067615s 1.017346896s 1.019460566s 
1.025097269s 1.026422644s 1.030181834s 1.033148387s 1.035845817s 1.037521614s 1.038785441s 1.041624918s 1.042189384s 1.043484803s 1.043891772s 1.044555297s 1.044808246s 1.044990251s 1.045458461s 1.045650796s 1.049503406s 1.054101196s 1.054393354s 1.059274897s 1.059765823s 1.060291602s 1.063918418s 1.064382601s 1.071867426s 1.073213104s 1.078274315s 1.079207393s 1.085606397s 1.087427212s 1.089908482s 1.095627537s 1.096870132s 1.099404242s 1.101782699s 1.103292756s 1.105496566s 1.106928106s 1.110676237s 1.112899102s 1.112954715s 1.113357696s 1.119544981s 1.123259019s 1.12428138s 1.125411663s 1.126373365s 1.129456252s 1.131841594s 1.136777485s 1.137247907s 1.14551547s 1.167701874s 1.173457138s 1.178418239s 1.184662918s 1.197780528s 1.199923788s 1.225803693s 1.249474527s 1.393534922s 1.413192814s 1.437171277s 1.451251489s 1.464907873s 1.61973734s 1.64047927s 1.656568684s 1.672946147s 1.676206515s 1.682517774s 1.688621607s 1.737100334s 1.743541538s 1.754369098s] Jul 1 12:46:20.166: INFO: 50 %ile: 963.324055ms Jul 1 12:46:20.166: INFO: 90 %ile: 1.184662918s Jul 1 12:46:20.166: INFO: 99 %ile: 1.743541538s Jul 1 12:46:20.166: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:46:20.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8316" for this suite. 
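The percentile summary printed above (50 %ile: 963.324055ms, 90 %ile: 1.184662918s, 99 %ile: 1.743541538s over 200 samples) can be sketched as a simple rank selection on the sorted latencies. This is a minimal illustration, not the e2e framework's actual Go code; the index rule `sorted[int(n * p / 100)]` is an assumption chosen because it reproduces the reported values (e.g. 963.324055ms is the 101st of the 200 sorted samples).

```python
def percentile(samples, p):
    """Return the p-th percentile of a list of latency samples.

    Assumption: rank selection sorted[int(n * p / 100)], which matches
    the 50/90/99 %ile values the log reports; the real e2e framework
    may handle edge cases differently.
    """
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    idx = min(int(len(ordered) * p / 100), len(ordered) - 1)
    return ordered[idx]

# With 200 evenly spaced samples the 50th percentile lands on the 101st
# value, mirroring how 963.324055ms sits at rank 101 of 200 in the log.
samples = list(range(1, 201))      # 1..200
print(percentile(samples, 50))     # → 101
print(percentile(samples, 90))     # → 181
print(percentile(samples, 99))     # → 199
```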
• [SLOW TEST:16.729 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":65,"skipped":1011,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:46:20.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:46:24.393: INFO: Waiting up to 5m0s for pod "client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89" in namespace "pods-8018" to be "success or failure" Jul 1 12:46:24.396: INFO: Pod "client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325714ms Jul 1 12:46:26.401: INFO: Pod "client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007497891s Jul 1 12:46:28.413: INFO: Pod "client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.019839028s Jul 1 12:46:30.425: INFO: Pod "client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031983594s STEP: Saw pod success Jul 1 12:46:30.425: INFO: Pod "client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89" satisfied condition "success or failure" Jul 1 12:46:30.453: INFO: Trying to get logs from node jerma-worker pod client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89 container env3cont: STEP: delete the pod Jul 1 12:46:30.585: INFO: Waiting for pod client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89 to disappear Jul 1 12:46:30.593: INFO: Pod client-envvars-84ef2cac-2e5c-4166-a8bd-0fc9075efe89 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:46:30.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8018" for this suite. • [SLOW TEST:10.434 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1019,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:46:30.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0701 12:46:32.000522 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 12:46:32.000: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:46:32.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-155" for this suite. 
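The garbage collector deletes the ReplicaSet (and its pods) because the Deployment controller stamps an `ownerReferences` entry on every ReplicaSet it creates; when the owner is deleted without orphaning, the collector removes everything pointing at the owner's UID. A plain-dict sketch of that metadata shape, assuming the `apps/v1` API (the name and UID below are made-up placeholders, not values from this test run):

```python
# ownerReference a Deployment controller sets on its ReplicaSets.
# Field names are the real Kubernetes API fields; the UID is a
# placeholder.
owner_ref = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "name": "my-deployment",                          # placeholder
    "uid": "00000000-0000-0000-0000-000000000000",    # placeholder
    "controller": True,
    "blockOwnerDeletion": True,
}

# The ReplicaSet's metadata carries the reference; deleting the
# Deployment (non-orphaning) makes the GC cascade to this object.
replicaset_metadata = {
    "name": "my-deployment-5d89f7b4c6",               # placeholder
    "ownerReferences": [owner_ref],
}
```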
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":67,"skipped":1029,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:46:32.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 1 12:46:39.657: INFO: 10 pods remaining Jul 1 12:46:39.657: INFO: 7 pods has nil DeletionTimestamp Jul 1 12:46:39.657: INFO: Jul 1 12:46:41.330: INFO: 6 pods remaining Jul 1 12:46:41.331: INFO: 0 pods has nil DeletionTimestamp Jul 1 12:46:41.331: INFO: Jul 1 12:46:42.927: INFO: 0 pods remaining Jul 1 12:46:42.927: INFO: 0 pods has nil DeletionTimestamp Jul 1 12:46:42.927: INFO: STEP: Gathering metrics W0701 12:46:45.737563 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 1 12:46:45.737: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:46:45.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4434" for this suite. 
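The "keep the rc around until all its pods are deleted" behavior is foreground cascading deletion: the API server sets the `foregroundDeletion` finalizer on the owner, so the rc remains visible (hence the "N pods remaining" countdown above) until its dependents are gone. A minimal sketch of the delete options and the resulting metadata, using the real API field names but placeholder values:

```python
# DeleteOptions requesting foreground cascading deletion (field names
# are the actual Kubernetes API fields; this is a plain-dict sketch,
# not a client library call).
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Foreground",
}

# While dependents remain, the owner persists with this finalizer set,
# which is why the test can still list the rc and count its pods after
# issuing the delete. The timestamp is a placeholder.
rc_metadata_during_deletion = {
    "finalizers": ["foregroundDeletion"],
    "deletionTimestamp": "2020-07-01T12:46:38Z",   # placeholder
}
```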
• [SLOW TEST:14.448 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":68,"skipped":1036,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:46:46.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Jul 1 12:46:47.452: INFO: Waiting up to 5m0s for pod "client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c" in namespace "containers-3863" to be "success or failure" Jul 1 12:46:48.102: INFO: Pod "client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 650.422775ms Jul 1 12:46:50.552: INFO: Pod "client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.100041717s Jul 1 12:46:52.663: INFO: Pod "client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.21072124s Jul 1 12:46:54.675: INFO: Pod "client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.222712865s STEP: Saw pod success Jul 1 12:46:54.675: INFO: Pod "client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c" satisfied condition "success or failure" Jul 1 12:46:54.717: INFO: Trying to get logs from node jerma-worker pod client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c container test-container: STEP: delete the pod Jul 1 12:46:54.963: INFO: Waiting for pod client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c to disappear Jul 1 12:46:55.028: INFO: Pod client-containers-2f4c632b-3814-41b0-807f-059eb4ee1b9c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:46:55.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3863" for this suite. 
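The "override the image's default command and arguments" test exercises the mapping between pod-spec fields and Dockerfile directives: `command` replaces the image's ENTRYPOINT and `args` replaces its CMD; setting both ignores the image defaults entirely. A hedged sketch of the container spec such a pod might use (the image and values here are illustrative, not the test's actual manifest):

```python
# spec.containers[0].command overrides the image ENTRYPOINT and
# spec.containers[0].args overrides the image CMD. Values below are
# illustrative only.
container = {
    "name": "test-container",
    "image": "busybox",                    # illustrative image
    "command": ["/bin/echo"],              # replaces ENTRYPOINT
    "args": ["override", "arguments"],     # replaces CMD
}
```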
• [SLOW TEST:8.551 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1038,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:46:55.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-b2077681-bdb6-48bb-a19b-ec4f541812fb [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:46:55.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1771" for this suite. 
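The empty-key Secret test relies on server-side validation: a Secret's data keys must be non-empty and consist only of alphanumerics, `-`, `_`, and `.`. A sketch of that check; the regex mirrors the documented key format, while `is_valid_secret_key` is a hypothetical helper, not a Kubernetes API:

```python
import re

# Kubernetes requires Secret (and ConfigMap) data keys to be non-empty
# and match this character set; the empty key in the test above fails
# server-side validation for exactly this reason.
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_secret_key(key: str) -> bool:
    # Hypothetical helper mirroring the documented key format.
    return bool(_KEY_RE.match(key))

print(is_valid_secret_key("tls.crt"))   # → True
print(is_valid_secret_key(""))          # → False
```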
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":70,"skipped":1047,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:46:55.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:47:14.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1736" for this 
suite. • [SLOW TEST:19.399 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":71,"skipped":1066,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:47:14.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 1 12:47:20.842: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:47:20.993: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-527" for this suite. • [SLOW TEST:6.352 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1066,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:47:21.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 1 12:47:33.459: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:33.533: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 12:47:35.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:36.037: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 12:47:37.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:37.537: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 12:47:39.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:39.541: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 12:47:41.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:41.537: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 12:47:43.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:43.537: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 12:47:45.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:45.538: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 12:47:47.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:47.536: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 12:47:49.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 12:47:49.538: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:47:49.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9714" for this suite. 
• [SLOW TEST:28.515 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1082,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:47:49.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jul 1 12:47:49.647: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Jul 1 12:47:49.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-790' Jul 1 
12:47:50.064: INFO: stderr: "" Jul 1 12:47:50.064: INFO: stdout: "service/agnhost-slave created\n" Jul 1 12:47:50.065: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Jul 1 12:47:50.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-790' Jul 1 12:47:50.383: INFO: stderr: "" Jul 1 12:47:50.383: INFO: stdout: "service/agnhost-master created\n" Jul 1 12:47:50.383: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jul 1 12:47:50.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-790' Jul 1 12:47:50.687: INFO: stderr: "" Jul 1 12:47:50.687: INFO: stdout: "service/frontend created\n" Jul 1 12:47:50.687: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jul 1 12:47:50.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-790' Jul 1 12:47:51.922: INFO: stderr: "" Jul 1 12:47:51.922: INFO: stdout: "deployment.apps/frontend created\n" Jul 1 12:47:51.922: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: 
app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 1 12:47:51.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-790' Jul 1 12:47:52.870: INFO: stderr: "" Jul 1 12:47:52.870: INFO: stdout: "deployment.apps/agnhost-master created\n" Jul 1 12:47:52.870: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 1 12:47:52.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-790' Jul 1 12:47:53.273: INFO: stderr: "" Jul 1 12:47:53.273: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jul 1 12:47:53.273: INFO: Waiting for all frontend pods to be Running. Jul 1 12:48:03.324: INFO: Waiting for frontend to serve content. Jul 1 12:48:04.523: INFO: Trying to add a new entry to the guestbook. Jul 1 12:48:04.548: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jul 1 12:48:04.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-790' Jul 1 12:48:04.861: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 1 12:48:04.861: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jul 1 12:48:04.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-790' Jul 1 12:48:05.093: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 12:48:05.093: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jul 1 12:48:05.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-790' Jul 1 12:48:05.338: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 12:48:05.338: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 1 12:48:05.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-790' Jul 1 12:48:05.485: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 12:48:05.485: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 1 12:48:05.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-790' Jul 1 12:48:05.636: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 1 12:48:05.636: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jul 1 12:48:05.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-790' Jul 1 12:48:05.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 12:48:05.817: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:05.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-790" for this suite. • [SLOW TEST:16.461 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":74,"skipped":1085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:06.015: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 12:48:06.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50" in namespace "downward-api-7398" to be "success or failure" Jul 1 12:48:06.985: INFO: Pod "downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139746ms Jul 1 12:48:09.099: INFO: Pod "downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122188664s Jul 1 12:48:11.162: INFO: Pod "downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185906285s Jul 1 12:48:13.263: INFO: Pod "downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.286622349s STEP: Saw pod success Jul 1 12:48:13.263: INFO: Pod "downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50" satisfied condition "success or failure" Jul 1 12:48:13.296: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50 container client-container: STEP: delete the pod Jul 1 12:48:13.476: INFO: Waiting for pod downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50 to disappear Jul 1 12:48:13.494: INFO: Pod downwardapi-volume-e901ae58-dbbe-4025-8313-1e8607deae50 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:13.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7398" for this suite. • [SLOW TEST:7.486 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1112,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:13.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 12:48:13.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7304' Jul 1 12:48:13.883: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 1 12:48:13.883: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Jul 1 12:48:13.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7304' Jul 1 12:48:14.046: INFO: stderr: "" Jul 1 12:48:14.046: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:14.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7304" for this suite. 
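The stderr line above warns that `kubectl run --generator=job/v1` is deprecated in favor of `kubectl create`. A roughly equivalent Job manifest for `kubectl create -f`, sketched from the flags shown in the log (the container name is an assumption; the job name and image are taken from the log), would be:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-httpd-job          # name used by the test above
spec:
  template:
    spec:
      containers:
      - name: e2e-test-httpd-job    # container name is illustrative
        image: docker.io/library/httpd:2.4.38-alpine
      restartPolicy: OnFailure      # from --restart=OnFailure
```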
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":76,"skipped":1114,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:14.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6872/configmap-test-b944a3f8-8edb-40ff-8e7a-274ae024f526 STEP: Creating a pod to test consume configMaps Jul 1 12:48:14.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d" in namespace "configmap-6872" to be "success or failure" Jul 1 12:48:14.328: INFO: Pod "pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d": Phase="Pending", Reason="", readiness=false. Elapsed: 96.280407ms Jul 1 12:48:16.331: INFO: Pod "pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099392478s Jul 1 12:48:18.334: INFO: Pod "pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102172932s Jul 1 12:48:20.337: INFO: Pod "pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.105001733s STEP: Saw pod success Jul 1 12:48:20.337: INFO: Pod "pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d" satisfied condition "success or failure" Jul 1 12:48:20.339: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d container env-test: STEP: delete the pod Jul 1 12:48:20.407: INFO: Waiting for pod pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d to disappear Jul 1 12:48:20.428: INFO: Pod pod-configmaps-3ce24bbe-f043-4178-ba48-169a20c2363d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:20.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6872" for this suite. • [SLOW TEST:6.384 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1117,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:20.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 12:48:21.502: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 12:48:23.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204501, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204501, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204501, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204501, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 12:48:25.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204501, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204501, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204501, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204501, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 12:48:28.599: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:38.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6360" for this suite. STEP: Destroying namespace "webhook-6360-markers" for this suite. 
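The test above registers a validating webhook and then checks that non-compliant pods and configmaps are denied. The registration happens programmatically via the AdmissionRegistration API; as a hedged sketch, the equivalent manifest would look roughly like the following (the configuration name, webhook name, and path are illustrative; the `e2e-test-webhook` service name and `webhook-6360` namespace appear in the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-resources        # illustrative
webhooks:
- name: deny-pods-and-configmaps.example.com   # illustrative
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-6360
      name: e2e-test-webhook
      path: /validate                  # illustrative
    caBundle: ""                       # base64-encoded CA cert goes here
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

Note the "namespace that bypass the webhook" step in the log: the test also whitelists a marker namespace (destroyed as "webhook-6360-markers" above) via a `namespaceSelector`, so policy-violating objects in that namespace are still admitted.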
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.396 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":78,"skipped":1124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:38.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 1 12:48:38.965: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:46.736: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3039" for this suite. • [SLOW TEST:7.923 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":79,"skipped":1148,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:46.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 1 12:48:46.820: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 12:48:46.845: INFO: Waiting for terminating namespaces to be deleted... 
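The InitContainer test that just completed verifies that on a restartPolicy=Never pod, init containers run one at a time, in spec order, and all must succeed before any app container starts. A minimal sketch of that ordering guarantee (container names are illustrative, not the framework's implementation):

```python
# Models the init-container contract for a restartPolicy=Never pod:
# init containers execute sequentially; the first failure terminates the
# pod as Failed with no retries, and app containers never start.

def run_pod_restart_never(init_containers, app_containers):
    """Each container is a (name, succeeds) pair.
    Returns (terminal phase, execution order)."""
    order = []
    for name, succeeds in init_containers:
        order.append(name)
        if not succeeds:
            # restartPolicy=Never: no retry, the whole pod fails.
            return "Failed", order
    for name, succeeds in app_containers:
        order.append(name)
        if not succeeds:
            return "Failed", order
    return "Succeeded", order
```

With all containers succeeding, the order is init containers first, then app containers, and the pod phase is Succeeded.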
Jul 1 12:48:46.848: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jul 1 12:48:46.864: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jul 1 12:48:46.864: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 12:48:46.864: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jul 1 12:48:46.864: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 12:48:46.864: INFO: pod-init-33fdfb78-4a96-4015-9837-7294381c2f3b from init-container-3039 started at 2020-07-01 12:48:39 +0000 UTC (1 container status recorded) Jul 1 12:48:46.864: INFO: Container run1 ready: false, restart count 0 Jul 1 12:48:46.864: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jul 1 12:48:46.872: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jul 1 12:48:46.872: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 12:48:46.872: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Jul 1 12:48:46.872: INFO: Container kube-bench ready: false, restart count 0 Jul 1 12:48:46.872: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Jul 1 12:48:46.872: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 12:48:46.872: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Jul 1 12:48:46.872: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Jul 1 12:48:47.225: INFO: Pod 
kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Jul 1 12:48:47.225: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Jul 1 12:48:47.225: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Jul 1 12:48:47.225: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jul 1 12:48:47.225: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Jul 1 12:48:47.230: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1f4483bf-b18e-4f5e-a5ea-89fdf1104a6c.161da1c8e0a3a47a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9690/filler-pod-1f4483bf-b18e-4f5e-a5ea-89fdf1104a6c to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-1f4483bf-b18e-4f5e-a5ea-89fdf1104a6c.161da1c966b97cfd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1f4483bf-b18e-4f5e-a5ea-89fdf1104a6c.161da1c99ece068e], Reason = [Created], Message = [Created container filler-pod-1f4483bf-b18e-4f5e-a5ea-89fdf1104a6c] STEP: Considering event: Type = [Normal], Name = [filler-pod-1f4483bf-b18e-4f5e-a5ea-89fdf1104a6c.161da1c9af04bd0b], Reason = [Started], Message = [Started container filler-pod-1f4483bf-b18e-4f5e-a5ea-89fdf1104a6c] STEP: Considering event: Type = [Normal], Name = [filler-pod-d498880e-31cc-4cab-9bb3-d4b6f5066920.161da1c8dd5e2382], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9690/filler-pod-d498880e-31cc-4cab-9bb3-d4b6f5066920 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-d498880e-31cc-4cab-9bb3-d4b6f5066920.161da1c9319c93ec], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already 
present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d498880e-31cc-4cab-9bb3-d4b6f5066920.161da1c9749676c0], Reason = [Created], Message = [Created container filler-pod-d498880e-31cc-4cab-9bb3-d4b6f5066920] STEP: Considering event: Type = [Normal], Name = [filler-pod-d498880e-31cc-4cab-9bb3-d4b6f5066920.161da1c988a0e2df], Reason = [Started], Message = [Started container filler-pod-d498880e-31cc-4cab-9bb3-d4b6f5066920] STEP: Considering event: Type = [Warning], Name = [additional-pod.161da1ca47b37ba0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.161da1ca49ddd539], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:54.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9690" for this suite. 
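The "Insufficient cpu" FailedScheduling events above come down to simple request accounting: the scheduler sums the CPU requests already placed on a node and rejects a pod whose request exceeds the remaining allocatable CPU. A sketch of that check, using millicpu figures consistent with the log (the 11230m allocatable value is an assumption chosen so that kindnet's 100m plus the 11130m filler pod exactly fill the node):

```python
# Sketch of scheduler CPU-fit accounting: a pod fits only if its request
# does not exceed allocatable minus the sum of existing requests.

def fits(node_allocatable_m, existing_requests_m, pod_request_m):
    """All values are CPU millicores (e.g. 100 == 100m)."""
    used = sum(existing_requests_m)
    return pod_request_m <= node_allocatable_m - used
```

With the node exactly full (100m + 11130m on an assumed 11230m allocatable), any additional request fails to schedule, which is precisely what the `additional-pod` events report.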
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.690 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":80,"skipped":1149,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:54.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:54.519: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2066" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":81,"skipped":1158,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:54.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-bd994292-4c0f-4286-8745-ad382d08d0ab STEP: Creating a pod to test consume configMaps Jul 1 12:48:54.753: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d06ffb7e-4a50-40f2-a054-ca9bdf373c37" in namespace "projected-4699" to be "success or failure" Jul 1 12:48:54.777: INFO: Pod "pod-projected-configmaps-d06ffb7e-4a50-40f2-a054-ca9bdf373c37": Phase="Pending", Reason="", readiness=false. Elapsed: 23.653982ms Jul 1 12:48:56.781: INFO: Pod "pod-projected-configmaps-d06ffb7e-4a50-40f2-a054-ca9bdf373c37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027763086s Jul 1 12:48:58.784: INFO: Pod "pod-projected-configmaps-d06ffb7e-4a50-40f2-a054-ca9bdf373c37": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030277763s STEP: Saw pod success Jul 1 12:48:58.784: INFO: Pod "pod-projected-configmaps-d06ffb7e-4a50-40f2-a054-ca9bdf373c37" satisfied condition "success or failure" Jul 1 12:48:58.786: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-d06ffb7e-4a50-40f2-a054-ca9bdf373c37 container projected-configmap-volume-test: STEP: delete the pod Jul 1 12:48:58.846: INFO: Waiting for pod pod-projected-configmaps-d06ffb7e-4a50-40f2-a054-ca9bdf373c37 to disappear Jul 1 12:48:58.861: INFO: Pod pod-projected-configmaps-d06ffb7e-4a50-40f2-a054-ca9bdf373c37 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:48:58.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4699" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1177,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:48:58.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 
12:48:59.210: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 1 12:49:02.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9729 create -f -' Jul 1 12:49:05.593: INFO: stderr: "" Jul 1 12:49:05.593: INFO: stdout: "e2e-test-crd-publish-openapi-8821-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 1 12:49:05.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9729 delete e2e-test-crd-publish-openapi-8821-crds test-cr' Jul 1 12:49:05.715: INFO: stderr: "" Jul 1 12:49:05.715: INFO: stdout: "e2e-test-crd-publish-openapi-8821-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jul 1 12:49:05.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9729 apply -f -' Jul 1 12:49:05.967: INFO: stderr: "" Jul 1 12:49:05.967: INFO: stdout: "e2e-test-crd-publish-openapi-8821-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 1 12:49:05.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9729 delete e2e-test-crd-publish-openapi-8821-crds test-cr' Jul 1 12:49:06.085: INFO: stderr: "" Jul 1 12:49:06.085: INFO: stdout: "e2e-test-crd-publish-openapi-8821-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jul 1 12:49:06.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8821-crds' Jul 1 12:49:06.345: INFO: stderr: "" Jul 1 12:49:06.345: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8821-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:49:09.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9729" for this suite. • [SLOW TEST:10.353 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":83,"skipped":1180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:49:09.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: 
get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 1 12:49:13.422: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:49:13.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7565" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1218,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:49:13.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:49:17.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5481" for this suite. 
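The termination-message test above relies on the kubelet reading the file at the container's terminationMessagePath after exit and surfacing its contents (here, "DONE") in the container status. A minimal sketch of that read, assuming the conventional default path and a truncation limit (the 4096-byte figure is an assumption for illustration, not taken from the log):

```python
# Sketch of the kubelet-side read behind terminationMessagePath: after a
# container terminates, the file at the configured path is read back and
# becomes the status's termination message. Missing file -> empty message.
import os

def read_termination_message(path="/dev/termination-log", limit=4096):
    """Return the termination message, truncated to `limit` bytes."""
    if not os.path.exists(path):
        return ""
    with open(path, "rb") as f:
        return f.read(limit).decode("utf-8", errors="replace")
```

The test variant here sets a non-default path and runs as a non-root user, but the read-back mechanism is the same.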
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1233,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:49:17.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0701 12:49:30.199055 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 1 12:49:30.199: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:49:30.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8122" for this suite. 
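The garbage-collector test above gives half of rc1's pods a second owner reference to rc2, then deletes rc1: a dependent may only be collected once *all* of its owners are gone, so pods still owned by the surviving rc must not be deleted. A simplified model of that rule (object names are illustrative, not the real GC graph code):

```python
# Sketch of the owner-reference collection rule: an object survives a GC
# pass as long as at least one of its owners is still alive.

def surviving_dependents(live_owners, dependents):
    """`dependents` maps object name -> set of owner names.
    Returns the subset that still has a live owner."""
    return {name: owners for name, owners in dependents.items()
            if owners & live_owners}
```

Applied to this test's shape: pods owned only by the deleted rc are collected, while pods that also list the surviving rc as an owner remain.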
• [SLOW TEST:12.519 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":86,"skipped":1268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:49:30.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-2527652a-73f5-40cc-82d8-b63b4783ce54 STEP: Creating a pod to test consume secrets Jul 1 12:49:30.511: INFO: Waiting up to 5m0s for pod "pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f" in namespace "secrets-1091" to be "success or failure" Jul 1 12:49:30.567: INFO: Pod "pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 55.039786ms Jul 1 12:49:32.947: INFO: Pod "pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435907255s Jul 1 12:49:34.951: INFO: Pod "pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f": Phase="Running", Reason="", readiness=true. Elapsed: 4.439473062s Jul 1 12:49:36.955: INFO: Pod "pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.443490475s STEP: Saw pod success Jul 1 12:49:36.955: INFO: Pod "pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f" satisfied condition "success or failure" Jul 1 12:49:37.156: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f container secret-volume-test: STEP: delete the pod Jul 1 12:49:37.354: INFO: Waiting for pod pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f to disappear Jul 1 12:49:37.360: INFO: Pod pod-secrets-024d385a-9c04-4895-bfcf-27a8e064e82f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:49:37.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1091" for this suite. 
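The multiple-volumes secrets test above mounts the same secret into one pod twice, via two distinct volumes at two paths. A sketch of the pod shape it builds, as a plain dict with illustrative names and image (the real test constructs the equivalent v1.Pod in Go):

```python
# Sketch of a pod that projects one secret through two separate volumes,
# each with its own mountPath in the same container.

def pod_with_secret_twice(secret_name):
    volumes = [
        {"name": f"secret-volume-{i}", "secret": {"secretName": secret_name}}
        for i in (1, 2)
    ]
    mounts = [
        {"name": v["name"],
         "mountPath": f"/etc/secret-volume-{i}",
         "readOnly": True}
        for i, v in enumerate(volumes, start=1)
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "spec": {
            "volumes": volumes,
            "containers": [{
                "name": "secret-volume-test",
                "image": "busybox",  # illustrative; the test uses its own image
                "volumeMounts": mounts,
            }],
        },
    }
```

The test then reads the projected files from both paths to confirm both mounts serve the secret's data.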
• [SLOW TEST:7.161 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1293,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:49:37.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 12:49:37.456: INFO: Waiting up to 5m0s for pod "downwardapi-volume-884478a9-0007-40ad-b764-cc73e5eb2f19" in namespace "downward-api-6564" to be "success or failure" Jul 1 12:49:37.468: INFO: Pod "downwardapi-volume-884478a9-0007-40ad-b764-cc73e5eb2f19": Phase="Pending", Reason="", readiness=false. Elapsed: 11.092037ms Jul 1 12:49:39.472: INFO: Pod "downwardapi-volume-884478a9-0007-40ad-b764-cc73e5eb2f19": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01530854s Jul 1 12:49:41.476: INFO: Pod "downwardapi-volume-884478a9-0007-40ad-b764-cc73e5eb2f19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019839566s STEP: Saw pod success Jul 1 12:49:41.476: INFO: Pod "downwardapi-volume-884478a9-0007-40ad-b764-cc73e5eb2f19" satisfied condition "success or failure" Jul 1 12:49:41.479: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-884478a9-0007-40ad-b764-cc73e5eb2f19 container client-container: STEP: delete the pod Jul 1 12:49:41.514: INFO: Waiting for pod downwardapi-volume-884478a9-0007-40ad-b764-cc73e5eb2f19 to disappear Jul 1 12:49:41.528: INFO: Pod downwardapi-volume-884478a9-0007-40ad-b764-cc73e5eb2f19 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:49:41.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6564" for this suite. 
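The downward-API test above projects the container's requests.memory into a volume file; with the default divisor of 1 the file holds the request as a plain integer byte count. A sketch of that rendering for the common binary suffixes (the quantity values are illustrative):

```python
# Sketch of how a memory resource quantity is rendered into a downward-API
# file: binary-suffixed quantities are expanded to an integer number of
# bytes (divisor 1). Only Ki/Mi/Gi are handled here.

_SUFFIXES = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}

def memory_request_file_contents(quantity):
    for suffix, mult in _SUFFIXES.items():
        if quantity.endswith(suffix):
            return str(int(quantity[: -len(suffix)]) * mult)
    return str(int(quantity))  # already a bare byte count
```

So a pod requesting 64Mi reads back "67108864" from the projected file, which is the string the test compares against.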
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1301,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:49:41.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jul 1 12:49:41.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jul 1 12:49:41.741: INFO: stderr: "" Jul 1 12:49:41.741: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:49:41.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7017" for this suite. 
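The cluster-info stdout captured above is wrapped in ANSI colour escapes (`\x1b[0;32m` and friends). A sketch of the validation the test performs: strip the escapes and check that the master service line is present (the regex covers SGR colour sequences only, which is all this output uses):

```python
# Strips ANSI SGR colour escapes from kubectl cluster-info output and
# checks for the "Kubernetes master ... is running at" line.
import re

ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def master_service_reported(cluster_info_stdout):
    plain = ANSI_ESCAPE.sub("", cluster_info_stdout)
    return "Kubernetes master" in plain and "is running at" in plain
```

Run against the stdout in the log, the stripped text begins "Kubernetes master is running at https://172.30.12.66:32770", which satisfies the check.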
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":89,"skipped":1304,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:49:41.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 12:49:42.922: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 12:49:44.932: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204582, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204582, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204583, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729204582, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 12:49:48.019: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:49:48.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1688" for this suite. STEP: Destroying namespace "webhook-1688-markers" for this suite. 
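The webhook test above registers webhooks that target ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, then confirms those webhooks cannot mutate or block deletion of webhook configurations. A simplified model of the safeguard it exercises, under the assumption that the apiserver skips calling admission webhooks on webhook configuration resources to avoid a self-inflicted deadlock (this dict-based dispatcher is illustrative, not the real admission chain):

```python
# Sketch: webhook configuration resources are exempt from admission
# webhooks, so a deny-everything webhook cannot make itself undeletable.

EXEMPT_RESOURCES = {
    "validatingwebhookconfigurations",
    "mutatingwebhookconfigurations",
}

def admit(resource, webhook_decision):
    """Apply a webhook's allow/deny decision unless the resource is exempt."""
    if resource in EXEMPT_RESOURCES:
        return True  # webhook skipped; request admitted regardless
    return bool(webhook_decision)
```

That is why the dummy configurations created above remain "possible to remove" even with a deny-all webhook registered against them.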
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.541 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":90,"skipped":1321,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:49:48.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 1 12:49:48.390: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:48.395: INFO: Number of nodes with available pods: 0 Jul 1 12:49:48.395: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:49:49.401: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:49.405: INFO: Number of nodes with available pods: 0 Jul 1 12:49:49.405: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:49:50.498: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:50.504: INFO: Number of nodes with available pods: 0 Jul 1 12:49:50.504: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:49:51.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:51.490: INFO: Number of nodes with available pods: 0 Jul 1 12:49:51.490: INFO: Node jerma-worker is running more than one daemon pod Jul 1 12:49:52.400: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:52.402: INFO: Number of nodes with available pods: 2 Jul 1 12:49:52.402: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jul 1 12:49:52.457: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:52.460: INFO: Number of nodes with available pods: 1 Jul 1 12:49:52.461: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:49:53.464: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:53.467: INFO: Number of nodes with available pods: 1 Jul 1 12:49:53.467: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:49:54.468: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:54.495: INFO: Number of nodes with available pods: 1 Jul 1 12:49:54.495: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:49:55.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:55.470: INFO: Number of nodes with available pods: 1 Jul 1 12:49:55.470: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:49:56.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:56.470: INFO: Number of nodes with available pods: 1 Jul 1 12:49:56.470: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:49:57.466: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:57.469: INFO: Number of nodes with available pods: 1 Jul 1 12:49:57.469: INFO: Node jerma-worker2 
is running more than one daemon pod Jul 1 12:49:58.465: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:58.468: INFO: Number of nodes with available pods: 1 Jul 1 12:49:58.468: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:49:59.466: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:49:59.469: INFO: Number of nodes with available pods: 1 Jul 1 12:49:59.469: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:50:00.465: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:50:00.469: INFO: Number of nodes with available pods: 1 Jul 1 12:50:00.469: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:50:01.465: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:50:01.468: INFO: Number of nodes with available pods: 1 Jul 1 12:50:01.468: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:50:02.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:50:02.472: INFO: Number of nodes with available pods: 1 Jul 1 12:50:02.472: INFO: Node jerma-worker2 is running more than one daemon pod Jul 1 12:50:03.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 12:50:03.471: INFO: Number of nodes with available pods: 2 Jul 1 
12:50:03.471: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6640, will wait for the garbage collector to delete the pods Jul 1 12:50:03.532: INFO: Deleting DaemonSet.extensions daemon-set took: 6.165378ms Jul 1 12:50:03.633: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.393759ms Jul 1 12:50:09.535: INFO: Number of nodes with available pods: 0 Jul 1 12:50:09.535: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 12:50:09.557: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6640/daemonsets","resourceVersion":"28780158"},"items":null} Jul 1 12:50:09.559: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6640/pods","resourceVersion":"28780158"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:50:09.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6640" for this suite. 
• [SLOW TEST:21.284 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":91,"skipped":1325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:50:09.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 1 12:50:09.704: INFO: Waiting up to 5m0s for pod "downward-api-216bc085-af17-4172-bf8d-4c178ffd77c8" in namespace "downward-api-4854" to be "success or failure" Jul 1 12:50:09.715: INFO: Pod "downward-api-216bc085-af17-4172-bf8d-4c178ffd77c8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.504541ms Jul 1 12:50:11.720: INFO: Pod "downward-api-216bc085-af17-4172-bf8d-4c178ffd77c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015621645s Jul 1 12:50:13.724: INFO: Pod "downward-api-216bc085-af17-4172-bf8d-4c178ffd77c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02003608s STEP: Saw pod success Jul 1 12:50:13.724: INFO: Pod "downward-api-216bc085-af17-4172-bf8d-4c178ffd77c8" satisfied condition "success or failure" Jul 1 12:50:13.727: INFO: Trying to get logs from node jerma-worker pod downward-api-216bc085-af17-4172-bf8d-4c178ffd77c8 container dapi-container: STEP: delete the pod Jul 1 12:50:13.768: INFO: Waiting for pod downward-api-216bc085-af17-4172-bf8d-4c178ffd77c8 to disappear Jul 1 12:50:13.780: INFO: Pod downward-api-216bc085-af17-4172-bf8d-4c178ffd77c8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:50:13.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4854" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1349,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:50:13.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 1 12:50:13.866: INFO: Waiting up to 5m0s for pod "pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e" in namespace "emptydir-9456" to be "success or failure" 
Jul 1 12:50:13.888: INFO: Pod "pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.964468ms Jul 1 12:50:15.892: INFO: Pod "pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025997789s Jul 1 12:50:17.896: INFO: Pod "pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e": Phase="Running", Reason="", readiness=true. Elapsed: 4.030431904s Jul 1 12:50:19.901: INFO: Pod "pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035575838s STEP: Saw pod success Jul 1 12:50:19.901: INFO: Pod "pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e" satisfied condition "success or failure" Jul 1 12:50:19.904: INFO: Trying to get logs from node jerma-worker2 pod pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e container test-container: STEP: delete the pod Jul 1 12:50:19.939: INFO: Waiting for pod pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e to disappear Jul 1 12:50:19.947: INFO: Pod pod-94009b6f-2f98-49f0-bf0c-a3ddf035e17e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:50:19.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9456" for this suite. 
• [SLOW TEST:6.168 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:50:19.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:50:20.032: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:50:24.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3981" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1394,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:50:24.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-7eeca882-17a7-4078-937e-c637f5a284d0 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:50:24.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7519" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":95,"skipped":1412,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:50:24.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Jul 1 12:50:24.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3085' Jul 1 12:50:24.764: INFO: stderr: "" Jul 1 12:50:24.764: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 1 12:50:24.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3085' Jul 1 12:50:24.882: INFO: stderr: "" Jul 1 12:50:24.882: INFO: stdout: "update-demo-nautilus-r8ccp update-demo-nautilus-vjpl7 " Jul 1 12:50:24.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8ccp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:24.966: INFO: stderr: "" Jul 1 12:50:24.966: INFO: stdout: "" Jul 1 12:50:24.966: INFO: update-demo-nautilus-r8ccp is created but not running Jul 1 12:50:29.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3085' Jul 1 12:50:30.068: INFO: stderr: "" Jul 1 12:50:30.068: INFO: stdout: "update-demo-nautilus-r8ccp update-demo-nautilus-vjpl7 " Jul 1 12:50:30.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8ccp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:30.170: INFO: stderr: "" Jul 1 12:50:30.170: INFO: stdout: "true" Jul 1 12:50:30.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8ccp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:30.269: INFO: stderr: "" Jul 1 12:50:30.269: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 12:50:30.269: INFO: validating pod update-demo-nautilus-r8ccp Jul 1 12:50:30.281: INFO: got data: { "image": "nautilus.jpg" } Jul 1 12:50:30.281: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 12:50:30.281: INFO: update-demo-nautilus-r8ccp is verified up and running Jul 1 12:50:30.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjpl7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:30.374: INFO: stderr: "" Jul 1 12:50:30.374: INFO: stdout: "true" Jul 1 12:50:30.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vjpl7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:30.461: INFO: stderr: "" Jul 1 12:50:30.461: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 12:50:30.461: INFO: validating pod update-demo-nautilus-vjpl7 Jul 1 12:50:30.556: INFO: got data: { "image": "nautilus.jpg" } Jul 1 12:50:30.556: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 1 12:50:30.557: INFO: update-demo-nautilus-vjpl7 is verified up and running STEP: rolling-update to new replication controller Jul 1 12:50:30.560: INFO: scanned /root for discovery docs: Jul 1 12:50:30.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3085' Jul 1 12:50:53.900: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 1 12:50:53.900: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 12:50:53.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3085' Jul 1 12:50:53.999: INFO: stderr: "" Jul 1 12:50:53.999: INFO: stdout: "update-demo-kitten-ngw59 update-demo-kitten-thbbp " Jul 1 12:50:53.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ngw59 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:54.095: INFO: stderr: "" Jul 1 12:50:54.095: INFO: stdout: "true" Jul 1 12:50:54.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ngw59 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:54.186: INFO: stderr: "" Jul 1 12:50:54.186: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jul 1 12:50:54.186: INFO: validating pod update-demo-kitten-ngw59 Jul 1 12:50:54.209: INFO: got data: { "image": "kitten.jpg" } Jul 1 12:50:54.209: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jul 1 12:50:54.210: INFO: update-demo-kitten-ngw59 is verified up and running Jul 1 12:50:54.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-thbbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:54.310: INFO: stderr: "" Jul 1 12:50:54.310: INFO: stdout: "true" Jul 1 12:50:54.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-thbbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3085' Jul 1 12:50:54.403: INFO: stderr: "" Jul 1 12:50:54.404: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jul 1 12:50:54.404: INFO: validating pod update-demo-kitten-thbbp Jul 1 12:50:54.413: INFO: got data: { "image": "kitten.jpg" } Jul 1 12:50:54.413: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jul 1 12:50:54.413: INFO: update-demo-kitten-thbbp is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:50:54.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3085" for this suite. 
• [SLOW TEST:30.073 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":96,"skipped":1427,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:50:54.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-9895 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9895 to expose endpoints map[] Jul 1 12:50:54.537: INFO: Get endpoints failed (9.171632ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jul 1 12:50:55.542: INFO: successfully validated that service multi-endpoint-test in namespace services-9895 exposes endpoints map[] (1.01370417s elapsed) STEP: Creating pod pod1 in namespace 
services-9895 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9895 to expose endpoints map[pod1:[100]] Jul 1 12:50:58.659: INFO: successfully validated that service multi-endpoint-test in namespace services-9895 exposes endpoints map[pod1:[100]] (3.10980918s elapsed) STEP: Creating pod pod2 in namespace services-9895 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9895 to expose endpoints map[pod1:[100] pod2:[101]] Jul 1 12:51:02.974: INFO: successfully validated that service multi-endpoint-test in namespace services-9895 exposes endpoints map[pod1:[100] pod2:[101]] (4.31080637s elapsed) STEP: Deleting pod pod1 in namespace services-9895 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9895 to expose endpoints map[pod2:[101]] Jul 1 12:51:04.090: INFO: successfully validated that service multi-endpoint-test in namespace services-9895 exposes endpoints map[pod2:[101]] (1.112419024s elapsed) STEP: Deleting pod pod2 in namespace services-9895 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9895 to expose endpoints map[] Jul 1 12:51:05.144: INFO: successfully validated that service multi-endpoint-test in namespace services-9895 exposes endpoints map[] (1.049646216s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:51:05.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9895" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.823 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":97,"skipped":1436,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:51:05.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-1ef083ec-afbc-4ea4-954d-a1a6ecf543c1 Jul 1 12:51:05.352: INFO: Pod name my-hostname-basic-1ef083ec-afbc-4ea4-954d-a1a6ecf543c1: Found 0 pods out of 1 Jul 1 12:51:10.362: INFO: Pod name my-hostname-basic-1ef083ec-afbc-4ea4-954d-a1a6ecf543c1: Found 1 pods out of 1 Jul 1 12:51:10.362: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1ef083ec-afbc-4ea4-954d-a1a6ecf543c1" are running Jul 1 12:51:10.370: INFO: Pod "my-hostname-basic-1ef083ec-afbc-4ea4-954d-a1a6ecf543c1-42tt7" is running (conditions: 
[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:51:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:51:08 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:51:08 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 12:51:05 +0000 UTC Reason: Message:}]) Jul 1 12:51:10.370: INFO: Trying to dial the pod Jul 1 12:51:15.381: INFO: Controller my-hostname-basic-1ef083ec-afbc-4ea4-954d-a1a6ecf543c1: Got expected result from replica 1 [my-hostname-basic-1ef083ec-afbc-4ea4-954d-a1a6ecf543c1-42tt7]: "my-hostname-basic-1ef083ec-afbc-4ea4-954d-a1a6ecf543c1-42tt7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:51:15.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8478" for this suite. 
• [SLOW TEST:10.145 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":98,"skipped":1442,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:51:15.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-7bb6e7db-fff5-4780-b358-1ab50c93b140
STEP: Creating a pod to test consume configMaps
Jul 1 12:51:15.510: INFO: Waiting up to 5m0s for pod "pod-configmaps-03d4e355-8837-4927-a215-9956a7082992" in namespace "configmap-7376" to be "success or failure"
Jul 1 12:51:15.514: INFO: Pod "pod-configmaps-03d4e355-8837-4927-a215-9956a7082992": Phase="Pending", Reason="", readiness=false. Elapsed: 3.861261ms
Jul 1 12:51:17.517: INFO: Pod "pod-configmaps-03d4e355-8837-4927-a215-9956a7082992": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.007387011s
Jul 1 12:51:19.522: INFO: Pod "pod-configmaps-03d4e355-8837-4927-a215-9956a7082992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012155012s
STEP: Saw pod success
Jul 1 12:51:19.522: INFO: Pod "pod-configmaps-03d4e355-8837-4927-a215-9956a7082992" satisfied condition "success or failure"
Jul 1 12:51:19.526: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-03d4e355-8837-4927-a215-9956a7082992 container configmap-volume-test:
STEP: delete the pod
Jul 1 12:51:19.556: INFO: Waiting for pod pod-configmaps-03d4e355-8837-4927-a215-9956a7082992 to disappear
Jul 1 12:51:19.641: INFO: Pod pod-configmaps-03d4e355-8837-4927-a215-9956a7082992 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:51:19.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7376" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1456,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:51:19.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the
pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 1 12:51:23.736: INFO: &Pod{ObjectMeta:{send-events-561b185b-de3f-440f-9e50-7e1959f56ff2 events-5148 /api/v1/namespaces/events-5148/pods/send-events-561b185b-de3f-440f-9e50-7e1959f56ff2 e09e0afd-6285-4b09-892f-3deec9388380 28780716 0 2020-07-01 12:51:19 +0000 UTC map[name:foo time:705952621] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlhtp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlhtp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlhtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:de
fault,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:51:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:51:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:51:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:51:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.237,StartTime:2020-07-01 12:51:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:51:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://5a87d7f3660ce9fcdb4a37174ab3bbd67600744462c3339552bc54eae0684242,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jul 1 12:51:25.751: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 1 12:51:27.755: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:51:27.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5148" for this suite. 
• [SLOW TEST:8.152 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":100,"skipped":1464,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:51:27.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 1 12:51:27.946: INFO: Waiting up to 5m0s for pod "pod-46ed28f1-dc63-46df-9146-2b3b0a054392" in namespace "emptydir-835" to be "success or failure"
Jul 1 12:51:27.974: INFO: Pod "pod-46ed28f1-dc63-46df-9146-2b3b0a054392": Phase="Pending", Reason="", readiness=false. Elapsed: 27.365018ms
Jul 1 12:51:29.979: INFO: Pod "pod-46ed28f1-dc63-46df-9146-2b3b0a054392": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.032282223s
Jul 1 12:51:31.983: INFO: Pod "pod-46ed28f1-dc63-46df-9146-2b3b0a054392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03653345s
STEP: Saw pod success
Jul 1 12:51:31.983: INFO: Pod "pod-46ed28f1-dc63-46df-9146-2b3b0a054392" satisfied condition "success or failure"
Jul 1 12:51:31.986: INFO: Trying to get logs from node jerma-worker2 pod pod-46ed28f1-dc63-46df-9146-2b3b0a054392 container test-container:
STEP: delete the pod
Jul 1 12:51:32.007: INFO: Waiting for pod pod-46ed28f1-dc63-46df-9146-2b3b0a054392 to disappear
Jul 1 12:51:32.011: INFO: Pod pod-46ed28f1-dc63-46df-9146-2b3b0a054392 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:51:32.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-835" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1502,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:51:32.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering
metrics
W0701 12:51:42.122437 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 1 12:51:42.122: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:51:42.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9290" for this suite.
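Entries like `Waiting up to 5m0s for pod "..." in namespace "..." to be "success or failure"` above come from a poll-until-terminal-phase helper. The real framework is Go, built on client-go's polling utilities; the following is only a minimal Python sketch of the same pattern, where `get_pod_phase` is a hypothetical callable standing in for an API-server lookup:

```python
import time

def wait_for_pod_phase(get_pod_phase, pod, timeout_s=300, interval_s=2.0):
    """Poll a pod's phase until it becomes terminal or the timeout expires.

    `get_pod_phase` is a hypothetical stand-in for a Kubernetes API lookup;
    it takes a pod name and returns the current phase string.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_pod_phase(pod)
        if phase in ("Succeeded", "Failed"):  # terminal phases
            return phase
        time.sleep(interval_s)
    raise TimeoutError(f"pod {pod} still not terminal after {timeout_s}s")
```

This mirrors what the log shows: repeated `Phase="Pending"` samples with growing `Elapsed`, then a final `Phase="Succeeded"` that satisfies the "success or failure" condition.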
• [SLOW TEST:10.112 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":102,"skipped":1511,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:51:42.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul 1 12:51:46.742: INFO: Successfully updated pod "annotationupdate87f5ee61-633c-4293-8637-7078a6d4e3ce"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:51:48.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-638" for this suite.
• [SLOW TEST:6.655 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1531,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:51:48.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 12:51:59.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4523" for this suite.
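The `{"msg":...,"total":278,"completed":...,"skipped":...,"failed":...}` records interleaved through this log are the suite's per-spec progress output. A small sketch, assuming the whole log has been saved to a text file, that extracts those embedded JSON records and tallies results (the field names are taken directly from the records above; the regex is a heuristic and could mis-fire on pathological `msg` strings):

```python
import json
import re

# Matches embedded records such as:
# {"msg":"PASSED [sig-apps] ...","total":278,"completed":98,"skipped":1442,"failed":0}
RECORD = re.compile(r'\{"msg":.*?"failed":\d+\}')

def tally(log_text):
    """Return (passed, failed) spec counts from the embedded JSON records.

    Records whose msg is not a PASSED/FAILED verdict (e.g. "Test Suite
    starting") are ignored.
    """
    passed = failed = 0
    for match in RECORD.finditer(log_text):
        rec = json.loads(match.group(0))
        if rec["msg"].startswith("PASSED"):
            passed += 1
        elif rec["msg"].startswith("FAILED"):
            failed += 1
    return passed, failed
```

Running this over the full run should agree with the final record's `completed` and `failed` counters, since the suite emits one record per finished spec.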
• [SLOW TEST:11.099 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":104,"skipped":1532,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 12:51:59.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3059
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jul 1 12:52:00.024: INFO: Found 0 stateful pods, waiting for 3
Jul 1 12:52:10.029: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 1
12:52:10.029: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 12:52:10.029: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 1 12:52:20.029: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 12:52:20.029: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 12:52:20.029: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 1 12:52:20.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3059 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 12:52:20.318: INFO: stderr: "I0701 12:52:20.173460 1162 log.go:172] (0xc000553130) (0xc000a8c000) Create stream\nI0701 12:52:20.173570 1162 log.go:172] (0xc000553130) (0xc000a8c000) Stream added, broadcasting: 1\nI0701 12:52:20.175661 1162 log.go:172] (0xc000553130) Reply frame received for 1\nI0701 12:52:20.175699 1162 log.go:172] (0xc000553130) (0xc000649ae0) Create stream\nI0701 12:52:20.175711 1162 log.go:172] (0xc000553130) (0xc000649ae0) Stream added, broadcasting: 3\nI0701 12:52:20.176494 1162 log.go:172] (0xc000553130) Reply frame received for 3\nI0701 12:52:20.176544 1162 log.go:172] (0xc000553130) (0xc0002fe000) Create stream\nI0701 12:52:20.176557 1162 log.go:172] (0xc000553130) (0xc0002fe000) Stream added, broadcasting: 5\nI0701 12:52:20.177545 1162 log.go:172] (0xc000553130) Reply frame received for 5\nI0701 12:52:20.266263 1162 log.go:172] (0xc000553130) Data frame received for 5\nI0701 12:52:20.266294 1162 log.go:172] (0xc0002fe000) (5) Data frame handling\nI0701 12:52:20.266316 1162 log.go:172] (0xc0002fe000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:52:20.308727 1162 log.go:172] (0xc000553130) Data frame received for 3\nI0701 12:52:20.308754 1162 log.go:172] 
(0xc000649ae0) (3) Data frame handling\nI0701 12:52:20.308762 1162 log.go:172] (0xc000649ae0) (3) Data frame sent\nI0701 12:52:20.308769 1162 log.go:172] (0xc000553130) Data frame received for 3\nI0701 12:52:20.308774 1162 log.go:172] (0xc000649ae0) (3) Data frame handling\nI0701 12:52:20.308788 1162 log.go:172] (0xc000553130) Data frame received for 5\nI0701 12:52:20.308801 1162 log.go:172] (0xc0002fe000) (5) Data frame handling\nI0701 12:52:20.311281 1162 log.go:172] (0xc000553130) Data frame received for 1\nI0701 12:52:20.311305 1162 log.go:172] (0xc000a8c000) (1) Data frame handling\nI0701 12:52:20.311323 1162 log.go:172] (0xc000a8c000) (1) Data frame sent\nI0701 12:52:20.311335 1162 log.go:172] (0xc000553130) (0xc000a8c000) Stream removed, broadcasting: 1\nI0701 12:52:20.311396 1162 log.go:172] (0xc000553130) Go away received\nI0701 12:52:20.311572 1162 log.go:172] (0xc000553130) (0xc000a8c000) Stream removed, broadcasting: 1\nI0701 12:52:20.311583 1162 log.go:172] (0xc000553130) (0xc000649ae0) Stream removed, broadcasting: 3\nI0701 12:52:20.311589 1162 log.go:172] (0xc000553130) (0xc0002fe000) Stream removed, broadcasting: 5\n" Jul 1 12:52:20.318: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 12:52:20.318: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 1 12:52:30.354: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 1 12:52:40.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3059 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 12:52:40.606: INFO: stderr: "I0701 12:52:40.511921 1183 log.go:172] (0xc000ae5760) (0xc00095e6e0) Create 
stream\nI0701 12:52:40.512343 1183 log.go:172] (0xc000ae5760) (0xc00095e6e0) Stream added, broadcasting: 1\nI0701 12:52:40.516959 1183 log.go:172] (0xc000ae5760) Reply frame received for 1\nI0701 12:52:40.517007 1183 log.go:172] (0xc000ae5760) (0xc000ace280) Create stream\nI0701 12:52:40.517038 1183 log.go:172] (0xc000ae5760) (0xc000ace280) Stream added, broadcasting: 3\nI0701 12:52:40.518960 1183 log.go:172] (0xc000ae5760) Reply frame received for 3\nI0701 12:52:40.518996 1183 log.go:172] (0xc000ae5760) (0xc0005cc640) Create stream\nI0701 12:52:40.519018 1183 log.go:172] (0xc000ae5760) (0xc0005cc640) Stream added, broadcasting: 5\nI0701 12:52:40.520022 1183 log.go:172] (0xc000ae5760) Reply frame received for 5\nI0701 12:52:40.597896 1183 log.go:172] (0xc000ae5760) Data frame received for 5\nI0701 12:52:40.598035 1183 log.go:172] (0xc0005cc640) (5) Data frame handling\nI0701 12:52:40.598055 1183 log.go:172] (0xc0005cc640) (5) Data frame sent\nI0701 12:52:40.598067 1183 log.go:172] (0xc000ae5760) Data frame received for 5\nI0701 12:52:40.598076 1183 log.go:172] (0xc0005cc640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 12:52:40.598113 1183 log.go:172] (0xc000ae5760) Data frame received for 3\nI0701 12:52:40.598134 1183 log.go:172] (0xc000ace280) (3) Data frame handling\nI0701 12:52:40.598147 1183 log.go:172] (0xc000ace280) (3) Data frame sent\nI0701 12:52:40.598153 1183 log.go:172] (0xc000ae5760) Data frame received for 3\nI0701 12:52:40.598158 1183 log.go:172] (0xc000ace280) (3) Data frame handling\nI0701 12:52:40.599260 1183 log.go:172] (0xc000ae5760) Data frame received for 1\nI0701 12:52:40.599288 1183 log.go:172] (0xc00095e6e0) (1) Data frame handling\nI0701 12:52:40.599305 1183 log.go:172] (0xc00095e6e0) (1) Data frame sent\nI0701 12:52:40.599325 1183 log.go:172] (0xc000ae5760) (0xc00095e6e0) Stream removed, broadcasting: 1\nI0701 12:52:40.599355 1183 log.go:172] (0xc000ae5760) Go away received\nI0701 12:52:40.599748 1183 
log.go:172] (0xc000ae5760) (0xc00095e6e0) Stream removed, broadcasting: 1\nI0701 12:52:40.599770 1183 log.go:172] (0xc000ae5760) (0xc000ace280) Stream removed, broadcasting: 3\nI0701 12:52:40.599781 1183 log.go:172] (0xc000ae5760) (0xc0005cc640) Stream removed, broadcasting: 5\n" Jul 1 12:52:40.606: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 12:52:40.607: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 12:52:50.672: INFO: Waiting for StatefulSet statefulset-3059/ss2 to complete update Jul 1 12:52:50.672: INFO: Waiting for Pod statefulset-3059/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 12:52:50.672: INFO: Waiting for Pod statefulset-3059/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 12:53:00.684: INFO: Waiting for StatefulSet statefulset-3059/ss2 to complete update Jul 1 12:53:00.684: INFO: Waiting for Pod statefulset-3059/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 12:53:00.684: INFO: Waiting for Pod statefulset-3059/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 12:53:10.682: INFO: Waiting for StatefulSet statefulset-3059/ss2 to complete update STEP: Rolling back to a previous revision Jul 1 12:53:20.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3059 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 12:53:20.960: INFO: stderr: "I0701 12:53:20.804074 1204 log.go:172] (0xc000982630) (0xc0004db540) Create stream\nI0701 12:53:20.804151 1204 log.go:172] (0xc000982630) (0xc0004db540) Stream added, broadcasting: 1\nI0701 12:53:20.831199 1204 log.go:172] (0xc000982630) Reply frame received for 1\nI0701 12:53:20.831236 1204 log.go:172] (0xc000982630) (0xc000a4c000) Create stream\nI0701 12:53:20.831250 1204 log.go:172] 
(0xc000982630) (0xc000a4c000) Stream added, broadcasting: 3\nI0701 12:53:20.832047 1204 log.go:172] (0xc000982630) Reply frame received for 3\nI0701 12:53:20.832094 1204 log.go:172] (0xc000982630) (0xc000713ae0) Create stream\nI0701 12:53:20.832108 1204 log.go:172] (0xc000982630) (0xc000713ae0) Stream added, broadcasting: 5\nI0701 12:53:20.833919 1204 log.go:172] (0xc000982630) Reply frame received for 5\nI0701 12:53:20.913309 1204 log.go:172] (0xc000982630) Data frame received for 5\nI0701 12:53:20.913338 1204 log.go:172] (0xc000713ae0) (5) Data frame handling\nI0701 12:53:20.913359 1204 log.go:172] (0xc000713ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 12:53:20.953940 1204 log.go:172] (0xc000982630) Data frame received for 3\nI0701 12:53:20.953960 1204 log.go:172] (0xc000a4c000) (3) Data frame handling\nI0701 12:53:20.953967 1204 log.go:172] (0xc000a4c000) (3) Data frame sent\nI0701 12:53:20.953972 1204 log.go:172] (0xc000982630) Data frame received for 3\nI0701 12:53:20.953976 1204 log.go:172] (0xc000a4c000) (3) Data frame handling\nI0701 12:53:20.954332 1204 log.go:172] (0xc000982630) Data frame received for 5\nI0701 12:53:20.954346 1204 log.go:172] (0xc000713ae0) (5) Data frame handling\nI0701 12:53:20.955863 1204 log.go:172] (0xc000982630) Data frame received for 1\nI0701 12:53:20.955891 1204 log.go:172] (0xc0004db540) (1) Data frame handling\nI0701 12:53:20.955905 1204 log.go:172] (0xc0004db540) (1) Data frame sent\nI0701 12:53:20.955919 1204 log.go:172] (0xc000982630) (0xc0004db540) Stream removed, broadcasting: 1\nI0701 12:53:20.956239 1204 log.go:172] (0xc000982630) (0xc0004db540) Stream removed, broadcasting: 1\nI0701 12:53:20.956257 1204 log.go:172] (0xc000982630) (0xc000a4c000) Stream removed, broadcasting: 3\nI0701 12:53:20.956355 1204 log.go:172] (0xc000982630) (0xc000713ae0) Stream removed, broadcasting: 5\n" Jul 1 12:53:20.960: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 
12:53:20.960: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 12:53:31.025: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 1 12:53:41.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3059 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 12:53:41.329: INFO: stderr: "I0701 12:53:41.220671 1225 log.go:172] (0xc000a62580) (0xc00075d4a0) Create stream\nI0701 12:53:41.220735 1225 log.go:172] (0xc000a62580) (0xc00075d4a0) Stream added, broadcasting: 1\nI0701 12:53:41.222986 1225 log.go:172] (0xc000a62580) Reply frame received for 1\nI0701 12:53:41.223034 1225 log.go:172] (0xc000a62580) (0xc0005dda40) Create stream\nI0701 12:53:41.223092 1225 log.go:172] (0xc000a62580) (0xc0005dda40) Stream added, broadcasting: 3\nI0701 12:53:41.224023 1225 log.go:172] (0xc000a62580) Reply frame received for 3\nI0701 12:53:41.224108 1225 log.go:172] (0xc000a62580) (0xc000934000) Create stream\nI0701 12:53:41.224132 1225 log.go:172] (0xc000a62580) (0xc000934000) Stream added, broadcasting: 5\nI0701 12:53:41.224925 1225 log.go:172] (0xc000a62580) Reply frame received for 5\nI0701 12:53:41.322133 1225 log.go:172] (0xc000a62580) Data frame received for 3\nI0701 12:53:41.322175 1225 log.go:172] (0xc0005dda40) (3) Data frame handling\nI0701 12:53:41.322193 1225 log.go:172] (0xc0005dda40) (3) Data frame sent\nI0701 12:53:41.322203 1225 log.go:172] (0xc000a62580) Data frame received for 3\nI0701 12:53:41.322214 1225 log.go:172] (0xc0005dda40) (3) Data frame handling\nI0701 12:53:41.322267 1225 log.go:172] (0xc000a62580) Data frame received for 5\nI0701 12:53:41.322301 1225 log.go:172] (0xc000934000) (5) Data frame handling\nI0701 12:53:41.322333 1225 log.go:172] (0xc000934000) (5) Data frame sent\nI0701 12:53:41.322355 1225 log.go:172] (0xc000a62580) Data frame 
received for 5\nI0701 12:53:41.322368 1225 log.go:172] (0xc000934000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 12:53:41.323795 1225 log.go:172] (0xc000a62580) Data frame received for 1\nI0701 12:53:41.323816 1225 log.go:172] (0xc00075d4a0) (1) Data frame handling\nI0701 12:53:41.323838 1225 log.go:172] (0xc00075d4a0) (1) Data frame sent\nI0701 12:53:41.323854 1225 log.go:172] (0xc000a62580) (0xc00075d4a0) Stream removed, broadcasting: 1\nI0701 12:53:41.324066 1225 log.go:172] (0xc000a62580) Go away received\nI0701 12:53:41.324263 1225 log.go:172] (0xc000a62580) (0xc00075d4a0) Stream removed, broadcasting: 1\nI0701 12:53:41.324285 1225 log.go:172] (0xc000a62580) (0xc0005dda40) Stream removed, broadcasting: 3\nI0701 12:53:41.324298 1225 log.go:172] (0xc000a62580) (0xc000934000) Stream removed, broadcasting: 5\n" Jul 1 12:53:41.329: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 12:53:41.329: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 12:53:51.584: INFO: Waiting for StatefulSet statefulset-3059/ss2 to complete update Jul 1 12:53:51.584: INFO: Waiting for Pod statefulset-3059/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 1 12:53:51.584: INFO: Waiting for Pod statefulset-3059/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 1 12:54:01.594: INFO: Waiting for StatefulSet statefulset-3059/ss2 to complete update Jul 1 12:54:01.594: INFO: Waiting for Pod statefulset-3059/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 1 12:54:11.595: INFO: Deleting all statefulset in ns statefulset-3059 Jul 1 12:54:11.598: INFO: Scaling statefulset ss2 to 0 Jul 1 
12:54:41.664: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 12:54:42.296: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:54:42.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3059" for this suite. • [SLOW TEST:162.553 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":105,"skipped":1544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:54:42.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 12:54:43.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2883' Jul 1 12:54:44.041: INFO: stderr: "" Jul 1 12:54:44.041: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Jul 1 12:54:44.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2883' Jul 1 12:54:49.116: INFO: stderr: "" Jul 1 12:54:49.116: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:54:49.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2883" for this suite. 
• [SLOW TEST:7.249 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":106,"skipped":1589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:54:49.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Jul 1 12:54:50.661: INFO: Waiting up to 5m0s for pod "pod-eb174ad8-b221-4690-96b1-8d50a24625aa" in namespace "emptydir-6661" to be "success or failure" Jul 1 12:54:50.668: INFO: Pod "pod-eb174ad8-b221-4690-96b1-8d50a24625aa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.56705ms Jul 1 12:54:52.814: INFO: Pod "pod-eb174ad8-b221-4690-96b1-8d50a24625aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152673939s Jul 1 12:54:54.843: INFO: Pod "pod-eb174ad8-b221-4690-96b1-8d50a24625aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181515998s STEP: Saw pod success Jul 1 12:54:54.843: INFO: Pod "pod-eb174ad8-b221-4690-96b1-8d50a24625aa" satisfied condition "success or failure" Jul 1 12:54:54.855: INFO: Trying to get logs from node jerma-worker2 pod pod-eb174ad8-b221-4690-96b1-8d50a24625aa container test-container: STEP: delete the pod Jul 1 12:54:55.359: INFO: Waiting for pod pod-eb174ad8-b221-4690-96b1-8d50a24625aa to disappear Jul 1 12:54:55.437: INFO: Pod pod-eb174ad8-b221-4690-96b1-8d50a24625aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:54:55.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6661" for this suite. 
• [SLOW TEST:5.755 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:54:55.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:54:55.688: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 1 12:55:00.691: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 12:55:00.691: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 1 
12:55:01.055: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4149 /apis/apps/v1/namespaces/deployment-4149/deployments/test-cleanup-deployment b47383d8-69b7-4f89-a52c-4c3c05d2a08d 28781913 1 2020-07-01 12:55:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ab28b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jul 1 12:55:01.246: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Jul 1 12:55:01.246: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 1 12:55:01.246: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4149 /apis/apps/v1/namespaces/deployment-4149/replicasets/test-cleanup-controller 941f73c4-f8fc-49d5-a226-7f72f4b333e1 28781915 1 2020-07-01 12:54:55 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment b47383d8-69b7-4f89-a52c-4c3c05d2a08d 0xc002ab2be7 0xc002ab2be8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ab2c48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 1 12:55:01.666: INFO: Pod "test-cleanup-controller-5r6fw" is available: &Pod{ObjectMeta:{test-cleanup-controller-5r6fw test-cleanup-controller- deployment-4149 /api/v1/namespaces/deployment-4149/pods/test-cleanup-controller-5r6fw 7ee7ec5a-2c14-4a83-aeb1-3eb1021b3d02 28781899 0 2020-07-01 12:54:55 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 941f73c4-f8fc-49d5-a226-7f72f4b333e1 0xc0025fada7 0xc0025fada8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cnrk6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cnrk6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cnrk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:54:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:54:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:54:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 12:54:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.246,StartTime:2020-07-01 12:54:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 12:54:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f6613ec9ec001fcbf1f875b6414a960e1b354438389be6e67a221e94c189ac74,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] 
Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:55:01.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4149" for this suite. • [SLOW TEST:7.308 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":108,"skipped":1782,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:55:02.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-436b2fc8-cec1-46a2-a39d-fd0589982adf STEP: Creating the pod STEP: Updating configmap configmap-test-upd-436b2fc8-cec1-46a2-a39d-fd0589982adf STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:56:21.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2355" for this 
suite. • [SLOW TEST:78.632 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1788,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:56:21.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 12:56:21.472: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ad212b9-8837-4390-8d87-1fa25b6e4b6d" in namespace "downward-api-1132" to be "success or failure" Jul 1 12:56:21.477: INFO: Pod "downwardapi-volume-1ad212b9-8837-4390-8d87-1fa25b6e4b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423076ms Jul 1 12:56:23.481: INFO: Pod "downwardapi-volume-1ad212b9-8837-4390-8d87-1fa25b6e4b6d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008633873s Jul 1 12:56:25.486: INFO: Pod "downwardapi-volume-1ad212b9-8837-4390-8d87-1fa25b6e4b6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01323015s STEP: Saw pod success Jul 1 12:56:25.486: INFO: Pod "downwardapi-volume-1ad212b9-8837-4390-8d87-1fa25b6e4b6d" satisfied condition "success or failure" Jul 1 12:56:25.488: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1ad212b9-8837-4390-8d87-1fa25b6e4b6d container client-container: STEP: delete the pod Jul 1 12:56:25.560: INFO: Waiting for pod downwardapi-volume-1ad212b9-8837-4390-8d87-1fa25b6e4b6d to disappear Jul 1 12:56:25.566: INFO: Pod downwardapi-volume-1ad212b9-8837-4390-8d87-1fa25b6e4b6d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:56:25.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1132" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1803,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:56:25.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 12:56:25.644: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0883b3fc-5e1b-4a8a-9667-b499ae333b7b" in namespace "security-context-test-8467" to be "success or failure" Jul 1 12:56:25.647: INFO: Pod "busybox-privileged-false-0883b3fc-5e1b-4a8a-9667-b499ae333b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.925583ms Jul 1 12:56:27.651: INFO: Pod "busybox-privileged-false-0883b3fc-5e1b-4a8a-9667-b499ae333b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006726645s Jul 1 12:56:29.655: INFO: Pod "busybox-privileged-false-0883b3fc-5e1b-4a8a-9667-b499ae333b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011246604s Jul 1 12:56:31.803: INFO: Pod "busybox-privileged-false-0883b3fc-5e1b-4a8a-9667-b499ae333b7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.159506513s Jul 1 12:56:31.804: INFO: Pod "busybox-privileged-false-0883b3fc-5e1b-4a8a-9667-b499ae333b7b" satisfied condition "success or failure" Jul 1 12:56:31.812: INFO: Got logs for pod "busybox-privileged-false-0883b3fc-5e1b-4a8a-9667-b499ae333b7b": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 12:56:31.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8467" for this suite. • [SLOW TEST:6.244 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 12:56:31.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-6aa8ac93-fcd1-4cd3-90db-ee6824f6aefa in namespace container-probe-5393 Jul 1 12:56:38.180: INFO: Started pod test-webserver-6aa8ac93-fcd1-4cd3-90db-ee6824f6aefa in namespace container-probe-5393 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 12:56:38.183: INFO: Initial restart count of pod test-webserver-6aa8ac93-fcd1-4cd3-90db-ee6824f6aefa is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:00:39.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5393" for this suite. 
• [SLOW TEST:247.354 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1835,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:00:39.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5341/configmap-test-00c60183-dc14-423e-9010-7e75833a3677 STEP: Creating a pod to test consume configMaps Jul 1 13:00:39.568: INFO: Waiting up to 5m0s for pod "pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85" in namespace "configmap-5341" to be "success or failure" Jul 1 13:00:39.613: INFO: Pod "pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85": Phase="Pending", Reason="", readiness=false. Elapsed: 45.26992ms Jul 1 13:00:41.617: INFO: Pod "pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049554144s Jul 1 13:00:43.644: INFO: Pod "pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076840858s Jul 1 13:00:45.663: INFO: Pod "pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85": Phase="Running", Reason="", readiness=true. Elapsed: 6.094913009s Jul 1 13:00:47.666: INFO: Pod "pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098566196s STEP: Saw pod success Jul 1 13:00:47.666: INFO: Pod "pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85" satisfied condition "success or failure" Jul 1 13:00:47.669: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85 container env-test: STEP: delete the pod Jul 1 13:00:47.703: INFO: Waiting for pod pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85 to disappear Jul 1 13:00:47.812: INFO: Pod pod-configmaps-d59b7b22-667e-4a6c-aefd-7d5df9a79c85 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:00:47.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5341" for this suite. 
• [SLOW TEST:8.646 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:00:47.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:00:47.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac" in namespace "downward-api-5602" to be "success or failure" Jul 1 13:00:48.008: INFO: Pod "downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.247123ms Jul 1 13:00:50.238: INFO: Pod "downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245994933s Jul 1 13:00:52.295: INFO: Pod "downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302476735s Jul 1 13:00:54.302: INFO: Pod "downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac": Phase="Running", Reason="", readiness=true. Elapsed: 6.309158994s Jul 1 13:00:56.306: INFO: Pod "downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.313761731s STEP: Saw pod success Jul 1 13:00:56.306: INFO: Pod "downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac" satisfied condition "success or failure" Jul 1 13:00:56.309: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac container client-container: STEP: delete the pod Jul 1 13:00:56.392: INFO: Waiting for pod downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac to disappear Jul 1 13:00:56.423: INFO: Pod downwardapi-volume-aa20b9bd-55b0-41e1-82f9-8d6f0f84e3ac no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:00:56.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5602" for this suite. 
• [SLOW TEST:8.609 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1876,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:00:56.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-e983bb32-2f0b-4a00-b762-8315abe206a4 in namespace container-probe-1551 Jul 1 13:01:00.728: INFO: Started pod liveness-e983bb32-2f0b-4a00-b762-8315abe206a4 in namespace container-probe-1551 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 13:01:00.731: INFO: Initial restart count of pod liveness-e983bb32-2f0b-4a00-b762-8315abe206a4 is 0 
Jul 1 13:01:18.775: INFO: Restart count of pod container-probe-1551/liveness-e983bb32-2f0b-4a00-b762-8315abe206a4 is now 1 (18.043924864s elapsed) Jul 1 13:01:38.841: INFO: Restart count of pod container-probe-1551/liveness-e983bb32-2f0b-4a00-b762-8315abe206a4 is now 2 (38.109622783s elapsed) Jul 1 13:01:58.882: INFO: Restart count of pod container-probe-1551/liveness-e983bb32-2f0b-4a00-b762-8315abe206a4 is now 3 (58.150706831s elapsed) Jul 1 13:02:17.005: INFO: Restart count of pod container-probe-1551/liveness-e983bb32-2f0b-4a00-b762-8315abe206a4 is now 4 (1m16.273651432s elapsed) Jul 1 13:03:29.927: INFO: Restart count of pod container-probe-1551/liveness-e983bb32-2f0b-4a00-b762-8315abe206a4 is now 5 (2m29.19550402s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:03:29.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1551" for this suite. 
• [SLOW TEST:153.586 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1891,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:03:30.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:03:45.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5430" for this suite. STEP: Destroying namespace "nsdeletetest-107" for this suite. Jul 1 13:03:45.646: INFO: Namespace nsdeletetest-107 was already deleted STEP: Destroying namespace "nsdeletetest-1931" for this suite. • [SLOW TEST:15.631 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":116,"skipped":1899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:03:45.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 1 
13:03:45.712: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 13:03:45.724: INFO: Waiting for terminating namespaces to be deleted... Jul 1 13:03:45.726: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jul 1 13:03:45.740: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:03:45.740: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 13:03:45.740: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:03:45.740: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 13:03:45.740: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jul 1 13:03:45.783: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jul 1 13:03:45.783: INFO: Container kube-hunter ready: false, restart count 0 Jul 1 13:03:45.783: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:03:45.784: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 13:03:45.784: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jul 1 13:03:45.784: INFO: Container kube-bench ready: false, restart count 0 Jul 1 13:03:45.784: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:03:45.784: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161da29a13e8be0c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:03:46.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8449" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":117,"skipped":1946,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:03:46.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9301.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9301.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9301.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9301.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9301.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9301.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 13:03:52.976: INFO: DNS probes using dns-9301/dns-test-0662dedc-5dee-4ffb-8174-ff3fcbfd2422 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:03:53.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9301" for this suite. 
• [SLOW TEST:7.147 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":118,"skipped":1965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:03:53.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:03:54.397: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:04:00.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-629" for this suite. 
• [SLOW TEST:6.575 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2086,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:04:00.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-575b0e63-3e75-4a4d-9f9f-9afb9535db93 STEP: Creating a pod to test consume secrets Jul 1 13:04:00.661: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c3ff55d6-8aca-491e-b47d-cae52bc304a3" in namespace "projected-8973" to be "success or failure" Jul 1 13:04:00.664: INFO: Pod "pod-projected-secrets-c3ff55d6-8aca-491e-b47d-cae52bc304a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.868575ms Jul 1 13:04:02.669: INFO: Pod "pod-projected-secrets-c3ff55d6-8aca-491e-b47d-cae52bc304a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00765903s Jul 1 13:04:04.673: INFO: Pod "pod-projected-secrets-c3ff55d6-8aca-491e-b47d-cae52bc304a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011995063s STEP: Saw pod success Jul 1 13:04:04.673: INFO: Pod "pod-projected-secrets-c3ff55d6-8aca-491e-b47d-cae52bc304a3" satisfied condition "success or failure" Jul 1 13:04:04.676: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c3ff55d6-8aca-491e-b47d-cae52bc304a3 container secret-volume-test: STEP: delete the pod Jul 1 13:04:04.751: INFO: Waiting for pod pod-projected-secrets-c3ff55d6-8aca-491e-b47d-cae52bc304a3 to disappear Jul 1 13:04:04.773: INFO: Pod pod-projected-secrets-c3ff55d6-8aca-491e-b47d-cae52bc304a3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:04:04.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8973" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2089,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:04:04.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:04:05.653: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:04:07.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205445, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205445, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729205445, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205445, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:04:09.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205445, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205445, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205445, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205445, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:04:12.862: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:04:12.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4548-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 
is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:04:14.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4715" for this suite. STEP: Destroying namespace "webhook-4715-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.448 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":121,"skipped":2091,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:04:14.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:04:47.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5585" for this suite. 
• [SLOW TEST:33.135 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2103,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:04:47.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 13:04:47.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1378' Jul 1 13:04:51.092: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 1 13:04:51.092: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Jul 1 13:04:53.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1378' Jul 1 13:04:53.530: INFO: stderr: "" Jul 1 13:04:53.530: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:04:53.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1378" for this suite. 
• [SLOW TEST:6.333 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1483 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":123,"skipped":2112,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:04:53.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 1 13:04:54.105: INFO: PodSpec: initContainers in spec.initContainers Jul 1 13:05:46.777: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c85c5045-404a-4f02-bec8-7c0ec0cda1a6", GenerateName:"", Namespace:"init-container-5113", 
SelfLink:"/api/v1/namespaces/init-container-5113/pods/pod-init-c85c5045-404a-4f02-bec8-7c0ec0cda1a6", UID:"2a64d4a7-0536-4ed6-88ea-b212999d234a", ResourceVersion:"28784321", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729205494, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"105800856"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-q9qgp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003104100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q9qgp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q9qgp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q9qgp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002db2278), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023e20c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002db2300)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002db2320)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002db2328), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002db232c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205494, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205494, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205494, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205494, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.39", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.39"}}, StartTime:(*v1.Time)(0xc0029e2160), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a041c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a04230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://bd0ed34c7c987495014c84cc84dd39e700936a90bf08d5f9ad5e010a6a6d075e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029e21a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029e2180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002db23af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:05:46.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"init-container-5113" for this suite. • [SLOW TEST:53.113 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":124,"skipped":2121,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:05:46.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 1 13:05:46.910: INFO: Waiting up to 5m0s for pod "pod-b79017f2-6ead-42c3-a70f-49cfec130b38" in namespace "emptydir-6257" to be "success or failure" Jul 1 13:05:46.925: INFO: Pod "pod-b79017f2-6ead-42c3-a70f-49cfec130b38": Phase="Pending", Reason="", readiness=false. Elapsed: 15.80329ms Jul 1 13:05:49.000: INFO: Pod "pod-b79017f2-6ead-42c3-a70f-49cfec130b38": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.090835298s Jul 1 13:05:51.005: INFO: Pod "pod-b79017f2-6ead-42c3-a70f-49cfec130b38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095406301s STEP: Saw pod success Jul 1 13:05:51.005: INFO: Pod "pod-b79017f2-6ead-42c3-a70f-49cfec130b38" satisfied condition "success or failure" Jul 1 13:05:51.009: INFO: Trying to get logs from node jerma-worker pod pod-b79017f2-6ead-42c3-a70f-49cfec130b38 container test-container: STEP: delete the pod Jul 1 13:05:51.105: INFO: Waiting for pod pod-b79017f2-6ead-42c3-a70f-49cfec130b38 to disappear Jul 1 13:05:51.119: INFO: Pod pod-b79017f2-6ead-42c3-a70f-49cfec130b38 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:05:51.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6257" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2139,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:05:51.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes 
STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 1 13:05:55.799: INFO: Successfully updated pod "pod-update-f4c1950f-4e49-4f78-bcc2-af49b28079e2" STEP: verifying the updated pod is in kubernetes Jul 1 13:05:55.808: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:05:55.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1423" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2152,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:05:55.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:05:55.896: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:05:57.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8537" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":127,"skipped":2154,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:05:57.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:05:57.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be" in namespace "projected-1213" to be "success or failure" Jul 1 13:05:57.272: INFO: Pod "downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be": Phase="Pending", Reason="", readiness=false. Elapsed: 18.972521ms Jul 1 13:05:59.277: INFO: Pod "downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023733711s Jul 1 13:06:01.350: INFO: Pod "downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.09706569s Jul 1 13:06:03.354: INFO: Pod "downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100886629s STEP: Saw pod success Jul 1 13:06:03.354: INFO: Pod "downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be" satisfied condition "success or failure" Jul 1 13:06:03.357: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be container client-container: STEP: delete the pod Jul 1 13:06:03.418: INFO: Waiting for pod downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be to disappear Jul 1 13:06:03.423: INFO: Pod downwardapi-volume-3f9cbefe-92b5-44ae-a8ac-efbd9daac8be no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:06:03.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1213" for this suite. 
• [SLOW TEST:6.273 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2170,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:06:03.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jul 1 13:06:03.607: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1767 /api/v1/namespaces/watch-1767/configmaps/e2e-watch-test-label-changed 4e3c0bb3-4f8c-4618-acfd-bc5844f17b43 28784471 0 2020-07-01 13:06:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 13:06:03.608: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1767 /api/v1/namespaces/watch-1767/configmaps/e2e-watch-test-label-changed 4e3c0bb3-4f8c-4618-acfd-bc5844f17b43 28784472 0 2020-07-01 13:06:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 1 13:06:03.608: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1767 /api/v1/namespaces/watch-1767/configmaps/e2e-watch-test-label-changed 4e3c0bb3-4f8c-4618-acfd-bc5844f17b43 28784473 0 2020-07-01 13:06:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jul 1 13:06:13.690: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1767 /api/v1/namespaces/watch-1767/configmaps/e2e-watch-test-label-changed 4e3c0bb3-4f8c-4618-acfd-bc5844f17b43 28784518 0 2020-07-01 13:06:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 13:06:13.690: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1767 /api/v1/namespaces/watch-1767/configmaps/e2e-watch-test-label-changed 4e3c0bb3-4f8c-4618-acfd-bc5844f17b43 28784519 0 2020-07-01 13:06:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},} Jul 1 13:06:13.690: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1767 /api/v1/namespaces/watch-1767/configmaps/e2e-watch-test-label-changed 4e3c0bb3-4f8c-4618-acfd-bc5844f17b43 28784520 0 2020-07-01 13:06:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:06:13.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1767" for this suite. • [SLOW TEST:10.304 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":129,"skipped":2188,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:06:13.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 1 13:06:18.479: INFO: Successfully updated pod "labelsupdateaca212c2-cda9-4215-b4e3-578d95268ee2" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:06:22.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3608" for this suite. • [SLOW TEST:9.123 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2199,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:06:22.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:06:39.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5273" for this suite. • [SLOW TEST:17.122 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":131,"skipped":2202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:06:39.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8084 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 13:06:40.175: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 1 13:07:08.925: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.4:8080/dial?request=hostname&protocol=udp&host=10.244.1.3&port=8081&tries=1'] Namespace:pod-network-test-8084 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 13:07:08.925: INFO: >>> kubeConfig: /root/.kube/config I0701 13:07:08.958299 6 log.go:172] (0xc002a7e580) (0xc001d01ae0) Create stream I0701 13:07:08.958340 6 log.go:172] (0xc002a7e580) (0xc001d01ae0) Stream added, broadcasting: 1 I0701 13:07:08.960619 6 log.go:172] (0xc002a7e580) Reply frame received for 1 I0701 13:07:08.960657 6 log.go:172] (0xc002a7e580) (0xc001dc28c0) Create stream I0701 13:07:08.960670 6 log.go:172] (0xc002a7e580) (0xc001dc28c0) Stream 
added, broadcasting: 3 I0701 13:07:08.961962 6 log.go:172] (0xc002a7e580) Reply frame received for 3 I0701 13:07:08.962004 6 log.go:172] (0xc002a7e580) (0xc000275720) Create stream I0701 13:07:08.962021 6 log.go:172] (0xc002a7e580) (0xc000275720) Stream added, broadcasting: 5 I0701 13:07:08.962946 6 log.go:172] (0xc002a7e580) Reply frame received for 5 I0701 13:07:09.135199 6 log.go:172] (0xc002a7e580) Data frame received for 3 I0701 13:07:09.135238 6 log.go:172] (0xc001dc28c0) (3) Data frame handling I0701 13:07:09.135264 6 log.go:172] (0xc001dc28c0) (3) Data frame sent I0701 13:07:09.135863 6 log.go:172] (0xc002a7e580) Data frame received for 3 I0701 13:07:09.135879 6 log.go:172] (0xc001dc28c0) (3) Data frame handling I0701 13:07:09.135904 6 log.go:172] (0xc002a7e580) Data frame received for 5 I0701 13:07:09.135932 6 log.go:172] (0xc000275720) (5) Data frame handling I0701 13:07:09.138498 6 log.go:172] (0xc002a7e580) Data frame received for 1 I0701 13:07:09.138517 6 log.go:172] (0xc001d01ae0) (1) Data frame handling I0701 13:07:09.138528 6 log.go:172] (0xc001d01ae0) (1) Data frame sent I0701 13:07:09.138543 6 log.go:172] (0xc002a7e580) (0xc001d01ae0) Stream removed, broadcasting: 1 I0701 13:07:09.138631 6 log.go:172] (0xc002a7e580) (0xc001d01ae0) Stream removed, broadcasting: 1 I0701 13:07:09.138686 6 log.go:172] (0xc002a7e580) (0xc001dc28c0) Stream removed, broadcasting: 3 I0701 13:07:09.138704 6 log.go:172] (0xc002a7e580) (0xc000275720) Stream removed, broadcasting: 5 Jul 1 13:07:09.138: INFO: Waiting for responses: map[] I0701 13:07:09.138799 6 log.go:172] (0xc002a7e580) Go away received Jul 1 13:07:09.143: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.4:8080/dial?request=hostname&protocol=udp&host=10.244.2.42&port=8081&tries=1'] Namespace:pod-network-test-8084 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 13:07:09.143: INFO: >>> kubeConfig: 
/root/.kube/config I0701 13:07:09.171056 6 log.go:172] (0xc0033193f0) (0xc001dc2dc0) Create stream I0701 13:07:09.171084 6 log.go:172] (0xc0033193f0) (0xc001dc2dc0) Stream added, broadcasting: 1 I0701 13:07:09.172919 6 log.go:172] (0xc0033193f0) Reply frame received for 1 I0701 13:07:09.172954 6 log.go:172] (0xc0033193f0) (0xc0008f40a0) Create stream I0701 13:07:09.172967 6 log.go:172] (0xc0033193f0) (0xc0008f40a0) Stream added, broadcasting: 3 I0701 13:07:09.174114 6 log.go:172] (0xc0033193f0) Reply frame received for 3 I0701 13:07:09.174182 6 log.go:172] (0xc0033193f0) (0xc0008f4e60) Create stream I0701 13:07:09.174195 6 log.go:172] (0xc0033193f0) (0xc0008f4e60) Stream added, broadcasting: 5 I0701 13:07:09.174937 6 log.go:172] (0xc0033193f0) Reply frame received for 5 I0701 13:07:09.241764 6 log.go:172] (0xc0033193f0) Data frame received for 3 I0701 13:07:09.241781 6 log.go:172] (0xc0008f40a0) (3) Data frame handling I0701 13:07:09.241796 6 log.go:172] (0xc0008f40a0) (3) Data frame sent I0701 13:07:09.242285 6 log.go:172] (0xc0033193f0) Data frame received for 5 I0701 13:07:09.242317 6 log.go:172] (0xc0008f4e60) (5) Data frame handling I0701 13:07:09.242340 6 log.go:172] (0xc0033193f0) Data frame received for 3 I0701 13:07:09.242353 6 log.go:172] (0xc0008f40a0) (3) Data frame handling I0701 13:07:09.243430 6 log.go:172] (0xc0033193f0) Data frame received for 1 I0701 13:07:09.243464 6 log.go:172] (0xc001dc2dc0) (1) Data frame handling I0701 13:07:09.243501 6 log.go:172] (0xc001dc2dc0) (1) Data frame sent I0701 13:07:09.243524 6 log.go:172] (0xc0033193f0) (0xc001dc2dc0) Stream removed, broadcasting: 1 I0701 13:07:09.243549 6 log.go:172] (0xc0033193f0) Go away received I0701 13:07:09.243642 6 log.go:172] (0xc0033193f0) (0xc001dc2dc0) Stream removed, broadcasting: 1 I0701 13:07:09.243655 6 log.go:172] (0xc0033193f0) (0xc0008f40a0) Stream removed, broadcasting: 3 I0701 13:07:09.243662 6 log.go:172] (0xc0033193f0) (0xc0008f4e60) Stream removed, broadcasting: 5 Jul 1 
13:07:09.243: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:07:09.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8084" for this suite. • [SLOW TEST:29.270 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2230,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:07:09.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jul 1 13:07:09.450: INFO: Waiting up to 5m0s for pod "var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c" in namespace 
"var-expansion-8010" to be "success or failure" Jul 1 13:07:09.501: INFO: Pod "var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.027456ms Jul 1 13:07:11.505: INFO: Pod "var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055483646s Jul 1 13:07:13.510: INFO: Pod "var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060023566s Jul 1 13:07:15.564: INFO: Pod "var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114000866s STEP: Saw pod success Jul 1 13:07:15.564: INFO: Pod "var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c" satisfied condition "success or failure" Jul 1 13:07:15.567: INFO: Trying to get logs from node jerma-worker pod var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c container dapi-container: STEP: delete the pod Jul 1 13:07:15.627: INFO: Waiting for pod var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c to disappear Jul 1 13:07:15.680: INFO: Pod var-expansion-909021ab-4156-45eb-a0bb-03ad048eb44c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:07:15.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8010" for this suite. 
• [SLOW TEST:6.442 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2236,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:07:15.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:07:16.152: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 1 13:07:21.157: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 13:07:21.157: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 1 13:07:23.160: INFO: Creating deployment "test-rollover-deployment" Jul 1 13:07:23.267: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 1 13:07:25.273: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 1 13:07:25.278: INFO: Ensure that both 
replica sets have 1 created replica Jul 1 13:07:25.283: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 1 13:07:25.288: INFO: Updating deployment test-rollover-deployment Jul 1 13:07:25.288: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 1 13:07:27.296: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 1 13:07:27.300: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 1 13:07:27.305: INFO: all replica sets need to contain the pod-template-hash label Jul 1 13:07:27.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205645, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:07:29.315: INFO: all replica sets need to contain the pod-template-hash label Jul 1 13:07:29.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205648, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:07:31.314: INFO: all replica sets need to contain the pod-template-hash label Jul 1 13:07:31.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205648, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:07:33.314: INFO: all replica sets need to contain the pod-template-hash label Jul 1 13:07:33.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205648, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:07:35.314: INFO: all replica sets need to contain the pod-template-hash label Jul 1 13:07:35.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205648, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:07:37.314: INFO: all replica sets need to contain the pod-template-hash label Jul 1 13:07:37.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205648, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729205643, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:07:39.415: INFO: Jul 1 13:07:39.415: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 1 13:07:39.421: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4212 /apis/apps/v1/namespaces/deployment-4212/deployments/test-rollover-deployment 6c3bbb2c-31ea-4027-8348-4899d7589051 28784978 2 2020-07-01 13:07:23 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f44398 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-01 13:07:23 +0000 UTC,LastTransitionTime:2020-07-01 13:07:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-07-01 13:07:39 +0000 UTC,LastTransitionTime:2020-07-01 13:07:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 1 13:07:39.423: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4212 /apis/apps/v1/namespaces/deployment-4212/replicasets/test-rollover-deployment-574d6dfbff 3662dfc3-ebc4-4bb8-a4cc-e2c29c6edf37 28784967 2 2020-07-01 13:07:25 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 6c3bbb2c-31ea-4027-8348-4899d7589051 
0xc002f44867 0xc002f44868}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f448d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 1 13:07:39.423: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 1 13:07:39.423: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4212 /apis/apps/v1/namespaces/deployment-4212/replicasets/test-rollover-controller f40887f1-0c87-47ed-be66-cd5ce1fb99d9 28784976 2 2020-07-01 13:07:16 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 6c3bbb2c-31ea-4027-8348-4899d7589051 0xc002f44787 0xc002f44788}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] 
[{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002f447e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 13:07:39.423: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4212 /apis/apps/v1/namespaces/deployment-4212/replicasets/test-rollover-deployment-f6c94f66c 2a4d8452-f0d9-4cef-b258-03419d66c632 28784916 2 2020-07-01 13:07:23 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 6c3bbb2c-31ea-4027-8348-4899d7589051 0xc002f44940 0xc002f44941}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f449b8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 13:07:39.426: INFO: Pod "test-rollover-deployment-574d6dfbff-5p9h8" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5p9h8 test-rollover-deployment-574d6dfbff- deployment-4212 /api/v1/namespaces/deployment-4212/pods/test-rollover-deployment-574d6dfbff-5p9h8 3c6ecbee-bb11-4e28-943e-af09b0bf77b4 28784935 0 2020-07-01 13:07:25 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 3662dfc3-ebc4-4bb8-a4cc-e2c29c6edf37 0xc002ddccf7 0xc002ddccf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hczqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hczqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hczqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,
TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:07:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:07:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:07:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:07:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.7,StartTime:2020-07-01 13:07:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:07:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2645d658cfc4273309693faf6eb3d46f3ba321c513b33deda750f3d0c4d18794,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:07:39.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4212" for this suite. 
• [SLOW TEST:23.740 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":134,"skipped":2238,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:07:39.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 1 13:07:44.756: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:07:45.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "container-runtime-4927" for this suite. • [SLOW TEST:5.673 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2252,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:07:45.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 13:07:45.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5895' Jul 1 13:07:45.495: INFO: stderr: "" Jul 1 13:07:45.495: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jul 1 13:07:50.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5895 -o json' Jul 1 13:07:50.635: INFO: stderr: "" Jul 1 13:07:50.635: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-01T13:07:45Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5895\",\n \"resourceVersion\": \"28785068\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5895/pods/e2e-test-httpd-pod\",\n \"uid\": \"63fc412e-0e57-4fb0-bf85-016c4aa9bb94\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-cctt2\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": 
\"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-cctt2\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-cctt2\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T13:07:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T13:07:48Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T13:07:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T13:07:45Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://533313d2a81f3011d9f8b013d2bfd0c8e092738e81e8f89a22a0a76c071f5757\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-01T13:07:48Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.9\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.9\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-01T13:07:45Z\"\n }\n}\n" STEP: replace the image in the pod Jul 1 13:07:50.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - 
--namespace=kubectl-5895' Jul 1 13:07:50.972: INFO: stderr: "" Jul 1 13:07:50.972: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Jul 1 13:07:50.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5895' Jul 1 13:07:54.046: INFO: stderr: "" Jul 1 13:07:54.046: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:07:54.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5895" for this suite. • [SLOW TEST:9.004 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":136,"skipped":2261,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:07:54.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9423 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9423 STEP: creating replication controller externalsvc in namespace services-9423 I0701 13:07:54.671770 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9423, replica count: 2 I0701 13:07:57.722101 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 13:08:00.722302 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jul 1 13:08:00.762: INFO: Creating new exec pod Jul 1 13:08:06.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9423 execpodk5qll -- /bin/sh -x -c nslookup clusterip-service' Jul 1 13:08:08.472: INFO: stderr: "I0701 13:08:06.976828 1421 log.go:172] (0xc000a0c000) (0xc000a86000) Create stream\nI0701 13:08:06.976876 1421 log.go:172] (0xc000a0c000) (0xc000a86000) Stream added, broadcasting: 1\nI0701 13:08:06.979396 1421 log.go:172] (0xc000a0c000) Reply frame received for 1\nI0701 13:08:06.979425 1421 log.go:172] (0xc000a0c000) (0xc0004f9e00) Create stream\nI0701 13:08:06.979432 1421 log.go:172] (0xc000a0c000) (0xc0004f9e00) Stream added, 
broadcasting: 3\nI0701 13:08:06.980227 1421 log.go:172] (0xc000a0c000) Reply frame received for 3\nI0701 13:08:06.980254 1421 log.go:172] (0xc000a0c000) (0xc000a860a0) Create stream\nI0701 13:08:06.980262 1421 log.go:172] (0xc000a0c000) (0xc000a860a0) Stream added, broadcasting: 5\nI0701 13:08:06.980851 1421 log.go:172] (0xc000a0c000) Reply frame received for 5\nI0701 13:08:07.427640 1421 log.go:172] (0xc000a0c000) Data frame received for 5\nI0701 13:08:07.427662 1421 log.go:172] (0xc000a860a0) (5) Data frame handling\nI0701 13:08:07.427672 1421 log.go:172] (0xc000a860a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0701 13:08:08.464289 1421 log.go:172] (0xc000a0c000) Data frame received for 3\nI0701 13:08:08.464315 1421 log.go:172] (0xc0004f9e00) (3) Data frame handling\nI0701 13:08:08.464337 1421 log.go:172] (0xc0004f9e00) (3) Data frame sent\nI0701 13:08:08.465651 1421 log.go:172] (0xc000a0c000) Data frame received for 3\nI0701 13:08:08.465666 1421 log.go:172] (0xc0004f9e00) (3) Data frame handling\nI0701 13:08:08.465678 1421 log.go:172] (0xc0004f9e00) (3) Data frame sent\nI0701 13:08:08.466215 1421 log.go:172] (0xc000a0c000) Data frame received for 5\nI0701 13:08:08.466234 1421 log.go:172] (0xc000a860a0) (5) Data frame handling\nI0701 13:08:08.466456 1421 log.go:172] (0xc000a0c000) Data frame received for 3\nI0701 13:08:08.466479 1421 log.go:172] (0xc0004f9e00) (3) Data frame handling\nI0701 13:08:08.468108 1421 log.go:172] (0xc000a0c000) Data frame received for 1\nI0701 13:08:08.468123 1421 log.go:172] (0xc000a86000) (1) Data frame handling\nI0701 13:08:08.468132 1421 log.go:172] (0xc000a86000) (1) Data frame sent\nI0701 13:08:08.468144 1421 log.go:172] (0xc000a0c000) (0xc000a86000) Stream removed, broadcasting: 1\nI0701 13:08:08.468162 1421 log.go:172] (0xc000a0c000) Go away received\nI0701 13:08:08.468524 1421 log.go:172] (0xc000a0c000) (0xc000a86000) Stream removed, broadcasting: 1\nI0701 13:08:08.468556 1421 log.go:172] (0xc000a0c000) (0xc0004f9e00) 
Stream removed, broadcasting: 3\nI0701 13:08:08.468577 1421 log.go:172] (0xc000a0c000) (0xc000a860a0) Stream removed, broadcasting: 5\n" Jul 1 13:08:08.472: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9423.svc.cluster.local\tcanonical name = externalsvc.services-9423.svc.cluster.local.\nName:\texternalsvc.services-9423.svc.cluster.local\nAddress: 10.104.254.196\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9423, will wait for the garbage collector to delete the pods Jul 1 13:08:08.582: INFO: Deleting ReplicationController externalsvc took: 5.496319ms Jul 1 13:08:08.882: INFO: Terminating ReplicationController externalsvc pods took: 300.292117ms Jul 1 13:08:19.504: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:08:19.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9423" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.456 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":137,"skipped":2267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:08:19.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jul 1 13:08:19.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6265' Jul 1 13:08:19.927: INFO: stderr: "" Jul 1 13:08:19.927: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Jul 1 13:08:20.932: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:20.932: INFO: Found 0 / 1 Jul 1 13:08:21.931: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:21.931: INFO: Found 0 / 1 Jul 1 13:08:22.930: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:22.931: INFO: Found 0 / 1 Jul 1 13:08:26.524: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:26.524: INFO: Found 0 / 1 Jul 1 13:08:26.933: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:26.933: INFO: Found 0 / 1 Jul 1 13:08:27.931: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:27.931: INFO: Found 0 / 1 Jul 1 13:08:28.975: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:28.975: INFO: Found 1 / 1 Jul 1 13:08:28.975: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jul 1 13:08:28.979: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:28.979: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 1 13:08:28.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-6tnz4 --namespace=kubectl-6265 -p {"metadata":{"annotations":{"x":"y"}}}' Jul 1 13:08:29.083: INFO: stderr: "" Jul 1 13:08:29.083: INFO: stdout: "pod/agnhost-master-6tnz4 patched\n" STEP: checking annotations Jul 1 13:08:29.107: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:08:29.107: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:08:29.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6265" for this suite. 
• [SLOW TEST:9.546 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":138,"skipped":2292,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:08:29.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 1 13:08:29.165: INFO: Waiting up to 5m0s for pod "pod-b47121e8-981d-413e-aaee-2db0c4be5b43" in namespace "emptydir-2566" to be "success or failure" Jul 1 13:08:29.171: INFO: Pod "pod-b47121e8-981d-413e-aaee-2db0c4be5b43": Phase="Pending", Reason="", readiness=false. Elapsed: 5.838692ms Jul 1 13:08:31.245: INFO: Pod "pod-b47121e8-981d-413e-aaee-2db0c4be5b43": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.079313673s Jul 1 13:08:33.256: INFO: Pod "pod-b47121e8-981d-413e-aaee-2db0c4be5b43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090172924s STEP: Saw pod success Jul 1 13:08:33.256: INFO: Pod "pod-b47121e8-981d-413e-aaee-2db0c4be5b43" satisfied condition "success or failure" Jul 1 13:08:33.259: INFO: Trying to get logs from node jerma-worker2 pod pod-b47121e8-981d-413e-aaee-2db0c4be5b43 container test-container: STEP: delete the pod Jul 1 13:08:33.364: INFO: Waiting for pod pod-b47121e8-981d-413e-aaee-2db0c4be5b43 to disappear Jul 1 13:08:33.369: INFO: Pod pod-b47121e8-981d-413e-aaee-2db0c4be5b43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:08:33.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2566" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2300,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:08:33.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-p5zk STEP: Creating a pod to test atomic-volume-subpath Jul 1 13:08:33.835: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-p5zk" in namespace "subpath-1175" to be "success or failure" Jul 1 13:08:33.873: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Pending", Reason="", readiness=false. Elapsed: 37.677561ms Jul 1 13:08:35.877: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042113327s Jul 1 13:08:37.881: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 4.045393565s Jul 1 13:08:39.891: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 6.056117001s Jul 1 13:08:41.895: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 8.059917597s Jul 1 13:08:43.902: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 10.066408663s Jul 1 13:08:45.906: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 12.070940299s Jul 1 13:08:47.911: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 14.07590997s Jul 1 13:08:49.921: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 16.08595769s Jul 1 13:08:51.926: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 18.090420401s Jul 1 13:08:53.931: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. Elapsed: 20.095393789s Jul 1 13:08:55.935: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.099990596s Jul 1 13:08:57.961: INFO: Pod "pod-subpath-test-downwardapi-p5zk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.125365597s STEP: Saw pod success Jul 1 13:08:57.961: INFO: Pod "pod-subpath-test-downwardapi-p5zk" satisfied condition "success or failure" Jul 1 13:08:57.966: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-p5zk container test-container-subpath-downwardapi-p5zk: STEP: delete the pod Jul 1 13:08:57.991: INFO: Waiting for pod pod-subpath-test-downwardapi-p5zk to disappear Jul 1 13:08:57.996: INFO: Pod pod-subpath-test-downwardapi-p5zk no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-p5zk Jul 1 13:08:57.996: INFO: Deleting pod "pod-subpath-test-downwardapi-p5zk" in namespace "subpath-1175" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:08:57.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1175" for this suite. 
• [SLOW TEST:24.627 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":140,"skipped":2305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:08:58.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 1 13:08:58.083: INFO: Waiting up to 5m0s for pod "pod-8ddd5940-df17-4d8e-8ee7-b818674b466d" in namespace "emptydir-2448" to be "success or failure" Jul 1 13:08:58.086: INFO: Pod "pod-8ddd5940-df17-4d8e-8ee7-b818674b466d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.348963ms Jul 1 13:09:00.179: INFO: Pod "pod-8ddd5940-df17-4d8e-8ee7-b818674b466d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.096032823s Jul 1 13:09:02.183: INFO: Pod "pod-8ddd5940-df17-4d8e-8ee7-b818674b466d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100425361s STEP: Saw pod success Jul 1 13:09:02.183: INFO: Pod "pod-8ddd5940-df17-4d8e-8ee7-b818674b466d" satisfied condition "success or failure" Jul 1 13:09:02.186: INFO: Trying to get logs from node jerma-worker2 pod pod-8ddd5940-df17-4d8e-8ee7-b818674b466d container test-container: STEP: delete the pod Jul 1 13:09:02.247: INFO: Waiting for pod pod-8ddd5940-df17-4d8e-8ee7-b818674b466d to disappear Jul 1 13:09:02.301: INFO: Pod pod-8ddd5940-df17-4d8e-8ee7-b818674b466d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:09:02.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2448" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:09:02.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and 
never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:09:22.451: INFO: Container started at 2020-07-01 13:09:05 +0000 UTC, pod became ready at 2020-07-01 13:09:20 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:09:22.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1714" for this suite. • [SLOW TEST:20.151 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2366,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:09:22.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jul 1 
13:09:27.113: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-961 pod-service-account-1cfa3f0c-0743-479a-b664-2bbb46e8a06a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jul 1 13:09:27.335: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-961 pod-service-account-1cfa3f0c-0743-479a-b664-2bbb46e8a06a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jul 1 13:09:27.625: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-961 pod-service-account-1cfa3f0c-0743-479a-b664-2bbb46e8a06a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:09:27.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-961" for this suite. • [SLOW TEST:5.514 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":143,"skipped":2382,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:09:27.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:09:28.052: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:09:28.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2156" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":144,"skipped":2386,"failed":0} S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:09:28.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-944 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-944 STEP: Deleting pre-stop pod Jul 1 13:09:41.876: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:09:41.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-944" for this suite. 
• [SLOW TEST:13.160 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":145,"skipped":2387,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:09:41.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:09:41.942: INFO: Creating ReplicaSet my-hostname-basic-ba552a30-75b1-4b5f-8d62-07f40dc83a1a Jul 1 13:09:41.958: INFO: Pod name my-hostname-basic-ba552a30-75b1-4b5f-8d62-07f40dc83a1a: Found 0 pods out of 1 Jul 1 13:09:46.979: INFO: Pod name my-hostname-basic-ba552a30-75b1-4b5f-8d62-07f40dc83a1a: Found 1 pods out of 1 Jul 1 13:09:46.980: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ba552a30-75b1-4b5f-8d62-07f40dc83a1a" is running Jul 1 13:09:47.006: INFO: Pod "my-hostname-basic-ba552a30-75b1-4b5f-8d62-07f40dc83a1a-ph8b7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 13:09:42 +0000 UTC Reason: Message:} {Type:Ready 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 13:09:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 13:09:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 13:09:41 +0000 UTC Reason: Message:}]) Jul 1 13:09:47.006: INFO: Trying to dial the pod Jul 1 13:09:52.017: INFO: Controller my-hostname-basic-ba552a30-75b1-4b5f-8d62-07f40dc83a1a: Got expected result from replica 1 [my-hostname-basic-ba552a30-75b1-4b5f-8d62-07f40dc83a1a-ph8b7]: "my-hostname-basic-ba552a30-75b1-4b5f-8d62-07f40dc83a1a-ph8b7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:09:52.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1694" for this suite. 
• [SLOW TEST:10.130 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":146,"skipped":2395,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:09:52.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-db1adf8f-c5d3-47cb-ac9f-afe920d17900 STEP: Creating a pod to test consume secrets Jul 1 13:09:52.145: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c3c3b0c5-0bed-4bbf-8c85-1e6bb3e8d368" in namespace "projected-1768" to be "success or failure" Jul 1 13:09:52.154: INFO: Pod "pod-projected-secrets-c3c3b0c5-0bed-4bbf-8c85-1e6bb3e8d368": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.660005ms Jul 1 13:09:54.157: INFO: Pod "pod-projected-secrets-c3c3b0c5-0bed-4bbf-8c85-1e6bb3e8d368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012600376s Jul 1 13:09:56.162: INFO: Pod "pod-projected-secrets-c3c3b0c5-0bed-4bbf-8c85-1e6bb3e8d368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016985928s STEP: Saw pod success Jul 1 13:09:56.162: INFO: Pod "pod-projected-secrets-c3c3b0c5-0bed-4bbf-8c85-1e6bb3e8d368" satisfied condition "success or failure" Jul 1 13:09:56.165: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-c3c3b0c5-0bed-4bbf-8c85-1e6bb3e8d368 container projected-secret-volume-test: STEP: delete the pod Jul 1 13:09:56.186: INFO: Waiting for pod pod-projected-secrets-c3c3b0c5-0bed-4bbf-8c85-1e6bb3e8d368 to disappear Jul 1 13:09:56.201: INFO: Pod pod-projected-secrets-c3c3b0c5-0bed-4bbf-8c85-1e6bb3e8d368 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:09:56.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1768" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2407,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:09:56.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5502 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5502 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5502 Jul 1 13:09:56.285: INFO: Found 0 stateful pods, waiting for 1 Jul 1 13:10:06.290: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 1 13:10:06.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 
ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 13:10:06.606: INFO: stderr: "I0701 13:10:06.433080 1542 log.go:172] (0xc00010ebb0) (0xc0006959a0) Create stream\nI0701 13:10:06.433303 1542 log.go:172] (0xc00010ebb0) (0xc0006959a0) Stream added, broadcasting: 1\nI0701 13:10:06.436069 1542 log.go:172] (0xc00010ebb0) Reply frame received for 1\nI0701 13:10:06.436117 1542 log.go:172] (0xc00010ebb0) (0xc000024000) Create stream\nI0701 13:10:06.436131 1542 log.go:172] (0xc00010ebb0) (0xc000024000) Stream added, broadcasting: 3\nI0701 13:10:06.436985 1542 log.go:172] (0xc00010ebb0) Reply frame received for 3\nI0701 13:10:06.437022 1542 log.go:172] (0xc00010ebb0) (0xc000026000) Create stream\nI0701 13:10:06.437033 1542 log.go:172] (0xc00010ebb0) (0xc000026000) Stream added, broadcasting: 5\nI0701 13:10:06.438399 1542 log.go:172] (0xc00010ebb0) Reply frame received for 5\nI0701 13:10:06.535656 1542 log.go:172] (0xc00010ebb0) Data frame received for 5\nI0701 13:10:06.535698 1542 log.go:172] (0xc000026000) (5) Data frame handling\nI0701 13:10:06.535721 1542 log.go:172] (0xc000026000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 13:10:06.594852 1542 log.go:172] (0xc00010ebb0) Data frame received for 5\nI0701 13:10:06.594889 1542 log.go:172] (0xc000026000) (5) Data frame handling\nI0701 13:10:06.594941 1542 log.go:172] (0xc00010ebb0) Data frame received for 3\nI0701 13:10:06.594970 1542 log.go:172] (0xc000024000) (3) Data frame handling\nI0701 13:10:06.595003 1542 log.go:172] (0xc000024000) (3) Data frame sent\nI0701 13:10:06.595020 1542 log.go:172] (0xc00010ebb0) Data frame received for 3\nI0701 13:10:06.595044 1542 log.go:172] (0xc000024000) (3) Data frame handling\nI0701 13:10:06.597666 1542 log.go:172] (0xc00010ebb0) Data frame received for 1\nI0701 13:10:06.597779 1542 log.go:172] (0xc0006959a0) (1) Data frame handling\nI0701 13:10:06.597828 1542 log.go:172] (0xc0006959a0) (1) Data frame sent\nI0701 
13:10:06.597854 1542 log.go:172] (0xc00010ebb0) (0xc0006959a0) Stream removed, broadcasting: 1\nI0701 13:10:06.597882 1542 log.go:172] (0xc00010ebb0) Go away received\nI0701 13:10:06.598360 1542 log.go:172] (0xc00010ebb0) (0xc0006959a0) Stream removed, broadcasting: 1\nI0701 13:10:06.598384 1542 log.go:172] (0xc00010ebb0) (0xc000024000) Stream removed, broadcasting: 3\nI0701 13:10:06.598397 1542 log.go:172] (0xc00010ebb0) (0xc000026000) Stream removed, broadcasting: 5\n" Jul 1 13:10:06.606: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 13:10:06.606: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 13:10:06.641: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 1 13:10:16.646: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 13:10:16.646: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 13:10:16.684: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:16.684: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:09:56 +0000 UTC }] Jul 1 13:10:16.684: INFO: Jul 1 13:10:16.684: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 1 13:10:17.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99299064s Jul 1 13:10:18.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988320794s Jul 1 13:10:19.911: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.861615234s Jul 1 13:10:20.956: 
INFO: Verifying statefulset ss doesn't scale past 3 for another 5.766063737s Jul 1 13:10:21.960: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.72160411s Jul 1 13:10:22.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.717276341s Jul 1 13:10:23.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.713396539s Jul 1 13:10:24.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.708048949s Jul 1 13:10:25.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 703.826037ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5502 Jul 1 13:10:26.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:10:27.228: INFO: stderr: "I0701 13:10:27.137314 1562 log.go:172] (0xc000752a50) (0xc00079a000) Create stream\nI0701 13:10:27.137395 1562 log.go:172] (0xc000752a50) (0xc00079a000) Stream added, broadcasting: 1\nI0701 13:10:27.140853 1562 log.go:172] (0xc000752a50) Reply frame received for 1\nI0701 13:10:27.140895 1562 log.go:172] (0xc000752a50) (0xc0006279a0) Create stream\nI0701 13:10:27.140906 1562 log.go:172] (0xc000752a50) (0xc0006279a0) Stream added, broadcasting: 3\nI0701 13:10:27.142511 1562 log.go:172] (0xc000752a50) Reply frame received for 3\nI0701 13:10:27.142584 1562 log.go:172] (0xc000752a50) (0xc000220000) Create stream\nI0701 13:10:27.142641 1562 log.go:172] (0xc000752a50) (0xc000220000) Stream added, broadcasting: 5\nI0701 13:10:27.145087 1562 log.go:172] (0xc000752a50) Reply frame received for 5\nI0701 13:10:27.220976 1562 log.go:172] (0xc000752a50) Data frame received for 3\nI0701 13:10:27.221000 1562 log.go:172] (0xc0006279a0) (3) Data frame handling\nI0701 13:10:27.221008 1562 log.go:172] (0xc0006279a0) (3) Data frame sent\nI0701 13:10:27.221031 1562 log.go:172] 
(0xc000752a50) Data frame received for 3\nI0701 13:10:27.221042 1562 log.go:172] (0xc0006279a0) (3) Data frame handling\nI0701 13:10:27.221053 1562 log.go:172] (0xc000752a50) Data frame received for 5\nI0701 13:10:27.221061 1562 log.go:172] (0xc000220000) (5) Data frame handling\nI0701 13:10:27.221069 1562 log.go:172] (0xc000220000) (5) Data frame sent\nI0701 13:10:27.221076 1562 log.go:172] (0xc000752a50) Data frame received for 5\nI0701 13:10:27.221082 1562 log.go:172] (0xc000220000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 13:10:27.222337 1562 log.go:172] (0xc000752a50) Data frame received for 1\nI0701 13:10:27.222358 1562 log.go:172] (0xc00079a000) (1) Data frame handling\nI0701 13:10:27.222370 1562 log.go:172] (0xc00079a000) (1) Data frame sent\nI0701 13:10:27.222386 1562 log.go:172] (0xc000752a50) (0xc00079a000) Stream removed, broadcasting: 1\nI0701 13:10:27.222403 1562 log.go:172] (0xc000752a50) Go away received\nI0701 13:10:27.222790 1562 log.go:172] (0xc000752a50) (0xc00079a000) Stream removed, broadcasting: 1\nI0701 13:10:27.222809 1562 log.go:172] (0xc000752a50) (0xc0006279a0) Stream removed, broadcasting: 3\nI0701 13:10:27.222820 1562 log.go:172] (0xc000752a50) (0xc000220000) Stream removed, broadcasting: 5\n" Jul 1 13:10:27.228: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 13:10:27.228: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 13:10:27.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:10:27.590: INFO: stderr: "I0701 13:10:27.376811 1583 log.go:172] (0xc000104b00) (0xc0005fbd60) Create stream\nI0701 13:10:27.376870 1583 log.go:172] (0xc000104b00) (0xc0005fbd60) Stream added, broadcasting: 1\nI0701 13:10:27.379465 1583 
log.go:172] (0xc000104b00) Reply frame received for 1\nI0701 13:10:27.379495 1583 log.go:172] (0xc000104b00) (0xc0004fe780) Create stream\nI0701 13:10:27.379504 1583 log.go:172] (0xc000104b00) (0xc0004fe780) Stream added, broadcasting: 3\nI0701 13:10:27.380365 1583 log.go:172] (0xc000104b00) Reply frame received for 3\nI0701 13:10:27.380397 1583 log.go:172] (0xc000104b00) (0xc0005fbe00) Create stream\nI0701 13:10:27.380410 1583 log.go:172] (0xc000104b00) (0xc0005fbe00) Stream added, broadcasting: 5\nI0701 13:10:27.381538 1583 log.go:172] (0xc000104b00) Reply frame received for 5\nI0701 13:10:27.583272 1583 log.go:172] (0xc000104b00) Data frame received for 3\nI0701 13:10:27.583299 1583 log.go:172] (0xc0004fe780) (3) Data frame handling\nI0701 13:10:27.583311 1583 log.go:172] (0xc0004fe780) (3) Data frame sent\nI0701 13:10:27.583320 1583 log.go:172] (0xc000104b00) Data frame received for 3\nI0701 13:10:27.583329 1583 log.go:172] (0xc0004fe780) (3) Data frame handling\nI0701 13:10:27.583360 1583 log.go:172] (0xc000104b00) Data frame received for 5\nI0701 13:10:27.583372 1583 log.go:172] (0xc0005fbe00) (5) Data frame handling\nI0701 13:10:27.583391 1583 log.go:172] (0xc0005fbe00) (5) Data frame sent\nI0701 13:10:27.583403 1583 log.go:172] (0xc000104b00) Data frame received for 5\nI0701 13:10:27.583413 1583 log.go:172] (0xc0005fbe00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0701 13:10:27.584339 1583 log.go:172] (0xc000104b00) Data frame received for 1\nI0701 13:10:27.584352 1583 log.go:172] (0xc0005fbd60) (1) Data frame handling\nI0701 13:10:27.584362 1583 log.go:172] (0xc0005fbd60) (1) Data frame sent\nI0701 13:10:27.584370 1583 log.go:172] (0xc000104b00) (0xc0005fbd60) Stream removed, broadcasting: 1\nI0701 13:10:27.584535 1583 log.go:172] (0xc000104b00) Go away received\nI0701 13:10:27.584612 1583 log.go:172] (0xc000104b00) (0xc0005fbd60) Stream removed, 
broadcasting: 1\nI0701 13:10:27.584625 1583 log.go:172] (0xc000104b00) (0xc0004fe780) Stream removed, broadcasting: 3\nI0701 13:10:27.584631 1583 log.go:172] (0xc000104b00) (0xc0005fbe00) Stream removed, broadcasting: 5\n" Jul 1 13:10:27.590: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 13:10:27.590: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 13:10:27.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:10:27.780: INFO: stderr: "I0701 13:10:27.713059 1605 log.go:172] (0xc000a566e0) (0xc0008e4000) Create stream\nI0701 13:10:27.713266 1605 log.go:172] (0xc000a566e0) (0xc0008e4000) Stream added, broadcasting: 1\nI0701 13:10:27.715255 1605 log.go:172] (0xc000a566e0) Reply frame received for 1\nI0701 13:10:27.715302 1605 log.go:172] (0xc000a566e0) (0xc00064fa40) Create stream\nI0701 13:10:27.715327 1605 log.go:172] (0xc000a566e0) (0xc00064fa40) Stream added, broadcasting: 3\nI0701 13:10:27.716035 1605 log.go:172] (0xc000a566e0) Reply frame received for 3\nI0701 13:10:27.716057 1605 log.go:172] (0xc000a566e0) (0xc0008e40a0) Create stream\nI0701 13:10:27.716065 1605 log.go:172] (0xc000a566e0) (0xc0008e40a0) Stream added, broadcasting: 5\nI0701 13:10:27.716928 1605 log.go:172] (0xc000a566e0) Reply frame received for 5\nI0701 13:10:27.774269 1605 log.go:172] (0xc000a566e0) Data frame received for 3\nI0701 13:10:27.774346 1605 log.go:172] (0xc00064fa40) (3) Data frame handling\nI0701 13:10:27.774370 1605 log.go:172] (0xc00064fa40) (3) Data frame sent\nI0701 13:10:27.774433 1605 log.go:172] (0xc000a566e0) Data frame received for 3\nI0701 13:10:27.774448 1605 log.go:172] (0xc00064fa40) (3) Data frame handling\nI0701 13:10:27.774468 1605 log.go:172] (0xc000a566e0) Data frame received for 
5\nI0701 13:10:27.774487 1605 log.go:172] (0xc0008e40a0) (5) Data frame handling\nI0701 13:10:27.774504 1605 log.go:172] (0xc0008e40a0) (5) Data frame sent\nI0701 13:10:27.774513 1605 log.go:172] (0xc000a566e0) Data frame received for 5\nI0701 13:10:27.774530 1605 log.go:172] (0xc0008e40a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0701 13:10:27.775736 1605 log.go:172] (0xc000a566e0) Data frame received for 1\nI0701 13:10:27.775797 1605 log.go:172] (0xc0008e4000) (1) Data frame handling\nI0701 13:10:27.775847 1605 log.go:172] (0xc0008e4000) (1) Data frame sent\nI0701 13:10:27.775880 1605 log.go:172] (0xc000a566e0) (0xc0008e4000) Stream removed, broadcasting: 1\nI0701 13:10:27.775915 1605 log.go:172] (0xc000a566e0) Go away received\nI0701 13:10:27.776534 1605 log.go:172] (0xc000a566e0) (0xc0008e4000) Stream removed, broadcasting: 1\nI0701 13:10:27.776557 1605 log.go:172] (0xc000a566e0) (0xc00064fa40) Stream removed, broadcasting: 3\nI0701 13:10:27.776569 1605 log.go:172] (0xc000a566e0) (0xc0008e40a0) Stream removed, broadcasting: 5\n" Jul 1 13:10:27.780: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 13:10:27.780: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 13:10:27.784: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 13:10:27.784: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 13:10:27.785: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 1 13:10:27.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ 
|| true' Jul 1 13:10:28.025: INFO: stderr: "I0701 13:10:27.917986 1625 log.go:172] (0xc000104370) (0xc0006015e0) Create stream\nI0701 13:10:27.918065 1625 log.go:172] (0xc000104370) (0xc0006015e0) Stream added, broadcasting: 1\nI0701 13:10:27.920804 1625 log.go:172] (0xc000104370) Reply frame received for 1\nI0701 13:10:27.920855 1625 log.go:172] (0xc000104370) (0xc000942000) Create stream\nI0701 13:10:27.920880 1625 log.go:172] (0xc000104370) (0xc000942000) Stream added, broadcasting: 3\nI0701 13:10:27.922211 1625 log.go:172] (0xc000104370) Reply frame received for 3\nI0701 13:10:27.922262 1625 log.go:172] (0xc000104370) (0xc0009420a0) Create stream\nI0701 13:10:27.922285 1625 log.go:172] (0xc000104370) (0xc0009420a0) Stream added, broadcasting: 5\nI0701 13:10:27.923425 1625 log.go:172] (0xc000104370) Reply frame received for 5\nI0701 13:10:28.017497 1625 log.go:172] (0xc000104370) Data frame received for 3\nI0701 13:10:28.017557 1625 log.go:172] (0xc000104370) Data frame received for 5\nI0701 13:10:28.017597 1625 log.go:172] (0xc0009420a0) (5) Data frame handling\nI0701 13:10:28.017694 1625 log.go:172] (0xc0009420a0) (5) Data frame sent\nI0701 13:10:28.017757 1625 log.go:172] (0xc000104370) Data frame received for 5\nI0701 13:10:28.017786 1625 log.go:172] (0xc0009420a0) (5) Data frame handling\nI0701 13:10:28.017834 1625 log.go:172] (0xc000942000) (3) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 13:10:28.017876 1625 log.go:172] (0xc000942000) (3) Data frame sent\nI0701 13:10:28.017915 1625 log.go:172] (0xc000104370) Data frame received for 3\nI0701 13:10:28.017927 1625 log.go:172] (0xc000942000) (3) Data frame handling\nI0701 13:10:28.020076 1625 log.go:172] (0xc000104370) Data frame received for 1\nI0701 13:10:28.020105 1625 log.go:172] (0xc0006015e0) (1) Data frame handling\nI0701 13:10:28.020123 1625 log.go:172] (0xc0006015e0) (1) Data frame sent\nI0701 13:10:28.020137 1625 log.go:172] (0xc000104370) (0xc0006015e0) Stream 
removed, broadcasting: 1\nI0701 13:10:28.020148 1625 log.go:172] (0xc000104370) Go away received\nI0701 13:10:28.020476 1625 log.go:172] (0xc000104370) (0xc0006015e0) Stream removed, broadcasting: 1\nI0701 13:10:28.020544 1625 log.go:172] (0xc000104370) (0xc000942000) Stream removed, broadcasting: 3\nI0701 13:10:28.020579 1625 log.go:172] (0xc000104370) (0xc0009420a0) Stream removed, broadcasting: 5\n" Jul 1 13:10:28.025: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 13:10:28.025: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 13:10:28.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 13:10:28.263: INFO: stderr: "I0701 13:10:28.150449 1646 log.go:172] (0xc0008e40b0) (0xc00074ef00) Create stream\nI0701 13:10:28.150517 1646 log.go:172] (0xc0008e40b0) (0xc00074ef00) Stream added, broadcasting: 1\nI0701 13:10:28.153360 1646 log.go:172] (0xc0008e40b0) Reply frame received for 1\nI0701 13:10:28.153441 1646 log.go:172] (0xc0008e40b0) (0xc000918000) Create stream\nI0701 13:10:28.153470 1646 log.go:172] (0xc0008e40b0) (0xc000918000) Stream added, broadcasting: 3\nI0701 13:10:28.154810 1646 log.go:172] (0xc0008e40b0) Reply frame received for 3\nI0701 13:10:28.154852 1646 log.go:172] (0xc0008e40b0) (0xc000639b80) Create stream\nI0701 13:10:28.154866 1646 log.go:172] (0xc0008e40b0) (0xc000639b80) Stream added, broadcasting: 5\nI0701 13:10:28.155921 1646 log.go:172] (0xc0008e40b0) Reply frame received for 5\nI0701 13:10:28.225767 1646 log.go:172] (0xc0008e40b0) Data frame received for 5\nI0701 13:10:28.225793 1646 log.go:172] (0xc000639b80) (5) Data frame handling\nI0701 13:10:28.225811 1646 log.go:172] (0xc000639b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 
13:10:28.253791 1646 log.go:172] (0xc0008e40b0) Data frame received for 3\nI0701 13:10:28.253825 1646 log.go:172] (0xc000918000) (3) Data frame handling\nI0701 13:10:28.253838 1646 log.go:172] (0xc000918000) (3) Data frame sent\nI0701 13:10:28.253847 1646 log.go:172] (0xc0008e40b0) Data frame received for 3\nI0701 13:10:28.253888 1646 log.go:172] (0xc000918000) (3) Data frame handling\nI0701 13:10:28.254780 1646 log.go:172] (0xc0008e40b0) Data frame received for 5\nI0701 13:10:28.254802 1646 log.go:172] (0xc000639b80) (5) Data frame handling\nI0701 13:10:28.256546 1646 log.go:172] (0xc0008e40b0) Data frame received for 1\nI0701 13:10:28.256568 1646 log.go:172] (0xc00074ef00) (1) Data frame handling\nI0701 13:10:28.256575 1646 log.go:172] (0xc00074ef00) (1) Data frame sent\nI0701 13:10:28.256584 1646 log.go:172] (0xc0008e40b0) (0xc00074ef00) Stream removed, broadcasting: 1\nI0701 13:10:28.256627 1646 log.go:172] (0xc0008e40b0) Go away received\nI0701 13:10:28.256826 1646 log.go:172] (0xc0008e40b0) (0xc00074ef00) Stream removed, broadcasting: 1\nI0701 13:10:28.256838 1646 log.go:172] (0xc0008e40b0) (0xc000918000) Stream removed, broadcasting: 3\nI0701 13:10:28.256844 1646 log.go:172] (0xc0008e40b0) (0xc000639b80) Stream removed, broadcasting: 5\n" Jul 1 13:10:28.263: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 13:10:28.263: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 13:10:28.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 13:10:28.524: INFO: stderr: "I0701 13:10:28.395638 1667 log.go:172] (0xc0008b66e0) (0xc000afe000) Create stream\nI0701 13:10:28.395716 1667 log.go:172] (0xc0008b66e0) (0xc000afe000) Stream added, broadcasting: 1\nI0701 13:10:28.398562 1667 log.go:172] 
(0xc0008b66e0) Reply frame received for 1\nI0701 13:10:28.398629 1667 log.go:172] (0xc0008b66e0) (0xc0006d5ae0) Create stream\nI0701 13:10:28.398647 1667 log.go:172] (0xc0008b66e0) (0xc0006d5ae0) Stream added, broadcasting: 3\nI0701 13:10:28.399674 1667 log.go:172] (0xc0008b66e0) Reply frame received for 3\nI0701 13:10:28.399707 1667 log.go:172] (0xc0008b66e0) (0xc000226000) Create stream\nI0701 13:10:28.399717 1667 log.go:172] (0xc0008b66e0) (0xc000226000) Stream added, broadcasting: 5\nI0701 13:10:28.400870 1667 log.go:172] (0xc0008b66e0) Reply frame received for 5\nI0701 13:10:28.467366 1667 log.go:172] (0xc0008b66e0) Data frame received for 5\nI0701 13:10:28.467390 1667 log.go:172] (0xc000226000) (5) Data frame handling\nI0701 13:10:28.467405 1667 log.go:172] (0xc000226000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 13:10:28.515778 1667 log.go:172] (0xc0008b66e0) Data frame received for 3\nI0701 13:10:28.515827 1667 log.go:172] (0xc0006d5ae0) (3) Data frame handling\nI0701 13:10:28.515863 1667 log.go:172] (0xc0006d5ae0) (3) Data frame sent\nI0701 13:10:28.516137 1667 log.go:172] (0xc0008b66e0) Data frame received for 5\nI0701 13:10:28.516150 1667 log.go:172] (0xc000226000) (5) Data frame handling\nI0701 13:10:28.516175 1667 log.go:172] (0xc0008b66e0) Data frame received for 3\nI0701 13:10:28.516215 1667 log.go:172] (0xc0006d5ae0) (3) Data frame handling\nI0701 13:10:28.518062 1667 log.go:172] (0xc0008b66e0) Data frame received for 1\nI0701 13:10:28.518081 1667 log.go:172] (0xc000afe000) (1) Data frame handling\nI0701 13:10:28.518097 1667 log.go:172] (0xc000afe000) (1) Data frame sent\nI0701 13:10:28.518123 1667 log.go:172] (0xc0008b66e0) (0xc000afe000) Stream removed, broadcasting: 1\nI0701 13:10:28.518234 1667 log.go:172] (0xc0008b66e0) Go away received\nI0701 13:10:28.518436 1667 log.go:172] (0xc0008b66e0) (0xc000afe000) Stream removed, broadcasting: 1\nI0701 13:10:28.518448 1667 log.go:172] (0xc0008b66e0) (0xc0006d5ae0) 
Stream removed, broadcasting: 3\nI0701 13:10:28.518454 1667 log.go:172] (0xc0008b66e0) (0xc000226000) Stream removed, broadcasting: 5\n" Jul 1 13:10:28.525: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 13:10:28.525: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 13:10:28.525: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 13:10:28.528: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 1 13:10:38.535: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 13:10:38.535: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 1 13:10:38.535: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 1 13:10:38.581: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:38.581: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:09:56 +0000 UTC }] Jul 1 13:10:38.581: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:38.581: INFO: ss-2 
jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:38.581: INFO: Jul 1 13:10:38.581: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 13:10:39.600: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:39.600: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:09:56 +0000 UTC }] Jul 1 13:10:39.600: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:39.600: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:39.600: INFO: Jul 1 13:10:39.600: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 13:10:40.605: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:40.605: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:09:56 +0000 UTC }] Jul 1 13:10:40.606: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:40.606: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:40.606: INFO: Jul 1 13:10:40.606: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 13:10:41.609: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:41.609: INFO: ss-1 
jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:41.609: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:41.609: INFO: Jul 1 13:10:41.609: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 13:10:42.613: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:42.614: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:42.614: INFO: Jul 1 13:10:42.614: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 13:10:43.618: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:43.618: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 
UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:43.619: INFO: Jul 1 13:10:43.619: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 13:10:44.623: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:44.623: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:44.623: INFO: Jul 1 13:10:44.623: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 13:10:45.627: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:45.627: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:45.628: INFO: Jul 1 13:10:45.628: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 13:10:46.632: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:46.632: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:46.632: INFO: Jul 1 13:10:46.632: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 13:10:47.637: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 13:10:47.637: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 13:10:16 +0000 UTC }] Jul 1 13:10:47.637: INFO: Jul 1 13:10:47.637: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5502 Jul 1 13:10:48.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:10:48.795: INFO: rc: 1 Jul 1 13:10:48.795: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jul 1 13:10:58.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:10:58.899: INFO: rc: 1 Jul 1 
13:10:58.899: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:11:08.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:11:09.002: INFO: rc: 1 Jul 1 13:11:09.002: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:11:19.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:11:19.114: INFO: rc: 1 Jul 1 13:11:19.114: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:11:29.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:11:29.208: INFO: rc: 1 Jul 1 13:11:29.208: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command 
stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:11:39.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:11:39.313: INFO: rc: 1 Jul 1 13:11:39.313: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:11:49.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:11:49.419: INFO: rc: 1 Jul 1 13:11:49.419: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:11:59.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:11:59.516: INFO: rc: 1 Jul 1 13:11:59.516: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:12:09.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:12:09.614: INFO: rc: 1 Jul 1 13:12:09.614: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:12:19.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:12:19.711: INFO: rc: 1 Jul 1 13:12:19.711: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:12:29.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:12:29.830: INFO: rc: 1 Jul 1 13:12:29.830: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:12:39.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:12:39.975: INFO: rc: 1 Jul 1 13:12:39.975: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 
ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:12:49.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:12:50.076: INFO: rc: 1 Jul 1 13:12:50.076: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:13:00.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:13:00.186: INFO: rc: 1 Jul 1 13:13:00.186: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:13:10.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:13:10.297: INFO: rc: 1 Jul 1 13:13:10.297: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:13:20.297: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:13:20.439: INFO: rc: 1 Jul 1 13:13:20.439: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:13:30.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:13:30.545: INFO: rc: 1 Jul 1 13:13:30.545: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:13:40.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:13:40.652: INFO: rc: 1 Jul 1 13:13:40.652: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:13:50.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:13:50.758: INFO: rc: 1 Jul 1 13:13:50.758: INFO: Waiting 10s to retry failed RunHostCmd: error 
running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:14:00.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:14:00.867: INFO: rc: 1 Jul 1 13:14:00.867: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:14:10.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:14:10.987: INFO: rc: 1 Jul 1 13:14:10.987: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:14:20.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:14:21.089: INFO: rc: 1 Jul 1 13:14:21.089: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found 
error: exit status 1 Jul 1 13:14:31.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:14:31.195: INFO: rc: 1 Jul 1 13:14:31.195: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:14:41.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:14:41.407: INFO: rc: 1 Jul 1 13:14:41.407: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:14:51.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:14:56.122: INFO: rc: 1 Jul 1 13:14:56.122: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:15:06.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:15:06.242: INFO: 
rc: 1 Jul 1 13:15:06.242: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:15:16.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:15:16.336: INFO: rc: 1 Jul 1 13:15:16.336: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:15:26.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:15:26.438: INFO: rc: 1 Jul 1 13:15:26.439: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:15:36.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:15:36.543: INFO: rc: 1 Jul 1 13:15:36.543: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:15:46.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:15:46.649: INFO: rc: 1 Jul 1 13:15:46.649: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jul 1 13:15:56.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5502 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 13:15:56.752: INFO: rc: 1 Jul 1 13:15:56.752: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Jul 1 13:15:56.752: INFO: Scaling statefulset ss to 0 Jul 1 13:15:56.760: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 1 13:15:56.762: INFO: Deleting all statefulset in ns statefulset-5502 Jul 1 13:15:56.764: INFO: Scaling statefulset ss to 0 Jul 1 13:15:56.772: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 13:15:56.775: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:15:56.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5502" for this suite. 
• [SLOW TEST:360.586 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":148,"skipped":2443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:15:56.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:15:57.452: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:15:59.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206157, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206157, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206157, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206157, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:16:01.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206157, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206157, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206157, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206157, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:16:04.503: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:16:05.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-983" for this suite. STEP: Destroying namespace "webhook-983-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.461 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":149,"skipped":2468,"failed":0} SS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 
13:16:06.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Jul 1 13:16:06.965: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1748" to be "success or failure" Jul 1 13:16:06.987: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 21.351339ms Jul 1 13:16:08.991: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025903702s Jul 1 13:16:11.312: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346830147s Jul 1 13:16:13.316: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.350685993s STEP: Saw pod success Jul 1 13:16:13.316: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jul 1 13:16:13.319: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jul 1 13:16:13.645: INFO: Waiting for pod pod-host-path-test to disappear Jul 1 13:16:13.651: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:16:13.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1748" for this suite. 
• [SLOW TEST:7.440 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:16:13.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-2269c691-1481-4731-887e-06e37b7c3762 STEP: Creating a pod to test consume configMaps Jul 1 13:16:13.845: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-124359d5-e9b1-43e4-b96b-e77d6ba53902" in namespace "projected-1836" to be "success or failure" Jul 1 13:16:13.866: INFO: Pod "pod-projected-configmaps-124359d5-e9b1-43e4-b96b-e77d6ba53902": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.024ms Jul 1 13:16:15.909: INFO: Pod "pod-projected-configmaps-124359d5-e9b1-43e4-b96b-e77d6ba53902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064542279s Jul 1 13:16:17.914: INFO: Pod "pod-projected-configmaps-124359d5-e9b1-43e4-b96b-e77d6ba53902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06877369s STEP: Saw pod success Jul 1 13:16:17.914: INFO: Pod "pod-projected-configmaps-124359d5-e9b1-43e4-b96b-e77d6ba53902" satisfied condition "success or failure" Jul 1 13:16:17.917: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-124359d5-e9b1-43e4-b96b-e77d6ba53902 container projected-configmap-volume-test: STEP: delete the pod Jul 1 13:16:17.946: INFO: Waiting for pod pod-projected-configmaps-124359d5-e9b1-43e4-b96b-e77d6ba53902 to disappear Jul 1 13:16:17.950: INFO: Pod pod-projected-configmaps-124359d5-e9b1-43e4-b96b-e77d6ba53902 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:16:17.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1836" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2506,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:16:17.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod Jul 1 13:16:18.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6524 -- logs-generator --log-lines-total 100 --run-duration 20s' Jul 1 13:16:18.262: INFO: stderr: "" Jul 1 13:16:18.262: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Jul 1 13:16:18.262: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jul 1 13:16:18.262: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6524" to be "running and ready, or succeeded" Jul 1 13:16:18.280: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.135542ms Jul 1 13:16:20.476: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21389677s Jul 1 13:16:22.480: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.21824733s Jul 1 13:16:22.480: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jul 1 13:16:22.481: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Jul 1 13:16:22.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6524' Jul 1 13:16:22.593: INFO: stderr: "" Jul 1 13:16:22.593: INFO: stdout: "I0701 13:16:21.700374 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/wxj 209\nI0701 13:16:21.900516 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/tj26 206\nI0701 13:16:22.100576 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/9qf 321\nI0701 13:16:22.300596 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/kmqh 510\nI0701 13:16:22.500519 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/kf2 568\n" STEP: limiting log lines Jul 1 13:16:22.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6524 --tail=1' Jul 1 13:16:22.701: INFO: stderr: "" Jul 1 13:16:22.701: INFO: stdout: "I0701 13:16:22.500519 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/kf2 568\n" Jul 1 13:16:22.701: INFO: got output "I0701 13:16:22.500519 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/kf2 568\n" STEP: limiting log bytes Jul 1 13:16:22.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6524 --limit-bytes=1' Jul 1 13:16:22.823: INFO: stderr: "" Jul 1 13:16:22.823: INFO: stdout: "I" Jul 1 13:16:22.823: INFO: got output "I" STEP: exposing 
timestamps Jul 1 13:16:22.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6524 --tail=1 --timestamps' Jul 1 13:16:22.939: INFO: stderr: "" Jul 1 13:16:22.939: INFO: stdout: "2020-07-01T13:16:22.900697987Z I0701 13:16:22.900536 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/kv2 373\n" Jul 1 13:16:22.939: INFO: got output "2020-07-01T13:16:22.900697987Z I0701 13:16:22.900536 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/kv2 373\n" STEP: restricting to a time range Jul 1 13:16:25.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6524 --since=1s' Jul 1 13:16:25.565: INFO: stderr: "" Jul 1 13:16:25.566: INFO: stdout: "I0701 13:16:24.700518 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/kg7 230\nI0701 13:16:24.900575 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/cmc8 387\nI0701 13:16:25.100540 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/4wlv 207\nI0701 13:16:25.300559 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/prq 427\nI0701 13:16:25.500508 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/5kwh 305\n" Jul 1 13:16:25.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6524 --since=24h' Jul 1 13:16:25.667: INFO: stderr: "" Jul 1 13:16:25.667: INFO: stdout: "I0701 13:16:21.700374 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/wxj 209\nI0701 13:16:21.900516 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/tj26 206\nI0701 13:16:22.100576 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/9qf 321\nI0701 13:16:22.300596 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/kmqh 510\nI0701 13:16:22.500519 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/kf2 568\nI0701 
13:16:22.700545 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/pgr 377\nI0701 13:16:22.900536 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/kv2 373\nI0701 13:16:23.100547 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/59j8 438\nI0701 13:16:23.300535 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/z6v7 275\nI0701 13:16:23.500531 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/vfjh 371\nI0701 13:16:23.700614 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/bknv 271\nI0701 13:16:23.900577 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/xkp 373\nI0701 13:16:24.100519 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/47s5 225\nI0701 13:16:24.300585 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/k6k 387\nI0701 13:16:24.500544 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/dxpg 288\nI0701 13:16:24.700518 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/kg7 230\nI0701 13:16:24.900575 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/cmc8 387\nI0701 13:16:25.100540 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/4wlv 207\nI0701 13:16:25.300559 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/prq 427\nI0701 13:16:25.500508 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/5kwh 305\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Jul 1 13:16:25.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6524' Jul 1 13:16:28.299: INFO: stderr: "" Jul 1 13:16:28.299: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:16:28.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-6524" for this suite. • [SLOW TEST:10.329 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":152,"skipped":2508,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:16:28.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-6fa7b3fe-6cb4-4a84-9da2-475bf4a3cf4f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:16:34.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8210" for this suite. 
• [SLOW TEST:6.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2517,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:16:34.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:16:47.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6250" for this suite. • [SLOW TEST:13.198 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":154,"skipped":2520,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:16:47.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 1 13:16:47.983: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Jul 1 13:16:48.401: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 1 13:16:50.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206208, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206208, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206208, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206208, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:16:52.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206208, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206208, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206208, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206208, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:16:55.375: INFO: Waited 616.791474ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:16:56.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9173" for this suite. • [SLOW TEST:8.339 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":155,"skipped":2522,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:16:56.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide 
container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:16:56.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc31ec8f-b83e-4a7f-926d-05f28edc41ec" in namespace "downward-api-8954" to be "success or failure" Jul 1 13:16:56.683: INFO: Pod "downwardapi-volume-cc31ec8f-b83e-4a7f-926d-05f28edc41ec": Phase="Pending", Reason="", readiness=false. Elapsed: 125.795015ms Jul 1 13:16:58.815: INFO: Pod "downwardapi-volume-cc31ec8f-b83e-4a7f-926d-05f28edc41ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257844015s Jul 1 13:17:00.827: INFO: Pod "downwardapi-volume-cc31ec8f-b83e-4a7f-926d-05f28edc41ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269193814s STEP: Saw pod success Jul 1 13:17:00.827: INFO: Pod "downwardapi-volume-cc31ec8f-b83e-4a7f-926d-05f28edc41ec" satisfied condition "success or failure" Jul 1 13:17:00.829: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cc31ec8f-b83e-4a7f-926d-05f28edc41ec container client-container: STEP: delete the pod Jul 1 13:17:00.918: INFO: Waiting for pod downwardapi-volume-cc31ec8f-b83e-4a7f-926d-05f28edc41ec to disappear Jul 1 13:17:00.997: INFO: Pod downwardapi-volume-cc31ec8f-b83e-4a7f-926d-05f28edc41ec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8954" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2535,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:01.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-334dd532-48d5-4cac-bff5-22720fa71a63 STEP: Creating a pod to test consume configMaps Jul 1 13:17:01.062: INFO: Waiting up to 5m0s for pod "pod-configmaps-40a5e3ef-9f38-48b3-891f-4e6668925069" in namespace "configmap-4154" to be "success or failure" Jul 1 13:17:01.066: INFO: Pod "pod-configmaps-40a5e3ef-9f38-48b3-891f-4e6668925069": Phase="Pending", Reason="", readiness=false. Elapsed: 3.779551ms Jul 1 13:17:03.070: INFO: Pod "pod-configmaps-40a5e3ef-9f38-48b3-891f-4e6668925069": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008132209s Jul 1 13:17:05.074: INFO: Pod "pod-configmaps-40a5e3ef-9f38-48b3-891f-4e6668925069": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012252825s STEP: Saw pod success Jul 1 13:17:05.075: INFO: Pod "pod-configmaps-40a5e3ef-9f38-48b3-891f-4e6668925069" satisfied condition "success or failure" Jul 1 13:17:05.078: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-40a5e3ef-9f38-48b3-891f-4e6668925069 container configmap-volume-test: STEP: delete the pod Jul 1 13:17:05.103: INFO: Waiting for pod pod-configmaps-40a5e3ef-9f38-48b3-891f-4e6668925069 to disappear Jul 1 13:17:05.108: INFO: Pod pod-configmaps-40a5e3ef-9f38-48b3-891f-4e6668925069 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:05.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4154" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2550,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:05.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Jul 1 13:17:09.265: INFO: ExecWithOptions {Command:[/bin/sh -c cat 
/usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5651 PodName:pod-sharedvolume-94462180-e5a5-4b4c-bf8e-b31a4ac8ccc6 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 13:17:09.265: INFO: >>> kubeConfig: /root/.kube/config I0701 13:17:09.301400 6 log.go:172] (0xc00320abb0) (0xc000fb1680) Create stream I0701 13:17:09.301438 6 log.go:172] (0xc00320abb0) (0xc000fb1680) Stream added, broadcasting: 1 I0701 13:17:09.303180 6 log.go:172] (0xc00320abb0) Reply frame received for 1 I0701 13:17:09.303208 6 log.go:172] (0xc00320abb0) (0xc000fb1720) Create stream I0701 13:17:09.303218 6 log.go:172] (0xc00320abb0) (0xc000fb1720) Stream added, broadcasting: 3 I0701 13:17:09.304192 6 log.go:172] (0xc00320abb0) Reply frame received for 3 I0701 13:17:09.304251 6 log.go:172] (0xc00320abb0) (0xc002036780) Create stream I0701 13:17:09.304274 6 log.go:172] (0xc00320abb0) (0xc002036780) Stream added, broadcasting: 5 I0701 13:17:09.305363 6 log.go:172] (0xc00320abb0) Reply frame received for 5 I0701 13:17:09.369451 6 log.go:172] (0xc00320abb0) Data frame received for 3 I0701 13:17:09.369480 6 log.go:172] (0xc000fb1720) (3) Data frame handling I0701 13:17:09.369498 6 log.go:172] (0xc00320abb0) Data frame received for 5 I0701 13:17:09.369523 6 log.go:172] (0xc002036780) (5) Data frame handling I0701 13:17:09.369551 6 log.go:172] (0xc000fb1720) (3) Data frame sent I0701 13:17:09.369567 6 log.go:172] (0xc00320abb0) Data frame received for 3 I0701 13:17:09.369580 6 log.go:172] (0xc000fb1720) (3) Data frame handling I0701 13:17:09.370565 6 log.go:172] (0xc00320abb0) Data frame received for 1 I0701 13:17:09.370600 6 log.go:172] (0xc000fb1680) (1) Data frame handling I0701 13:17:09.370623 6 log.go:172] (0xc000fb1680) (1) Data frame sent I0701 13:17:09.370642 6 log.go:172] (0xc00320abb0) (0xc000fb1680) Stream removed, broadcasting: 1 I0701 13:17:09.370674 6 log.go:172] (0xc00320abb0) Go away received I0701 13:17:09.370742 6 
log.go:172] (0xc00320abb0) (0xc000fb1680) Stream removed, broadcasting: 1 I0701 13:17:09.370761 6 log.go:172] (0xc00320abb0) (0xc000fb1720) Stream removed, broadcasting: 3 I0701 13:17:09.370770 6 log.go:172] (0xc00320abb0) (0xc002036780) Stream removed, broadcasting: 5 Jul 1 13:17:09.370: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:09.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5651" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":158,"skipped":2552,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:09.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 1 13:17:09.480: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 13:17:09.516: INFO: Waiting for terminating namespaces to be deleted... 
Jul 1 13:17:09.520: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jul 1 13:17:09.526: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:17:09.526: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 13:17:09.526: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:17:09.526: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 13:17:09.526: INFO: pod-sharedvolume-94462180-e5a5-4b4c-bf8e-b31a4ac8ccc6 from emptydir-5651 started at 2020-07-01 13:17:05 +0000 UTC (2 container statuses recorded) Jul 1 13:17:09.526: INFO: Container busybox-main-container ready: true, restart count 0 Jul 1 13:17:09.526: INFO: Container busybox-sub-container ready: true, restart count 0 Jul 1 13:17:09.526: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jul 1 13:17:09.531: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:17:09.531: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 13:17:09.531: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jul 1 13:17:09.531: INFO: Container kube-bench ready: false, restart count 0 Jul 1 13:17:09.531: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:17:09.531: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 13:17:09.531: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jul 1 13:17:09.531: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a3be12ea-936a-400e-bd46-22716136824d 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-a3be12ea-936a-400e-bd46-22716136824d off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a3be12ea-936a-400e-bd46-22716136824d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:26.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1302" for this suite. 
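The scheduling rule this SchedulerPredicates test verifies is that a hostPort collision only exists when the full hostPort/hostIP/protocol triple matches, so pod1 and pod2 above can share port 54321 on one node. A hypothetical manifest sketch of that situation (pod names, file path, and port numbers mirror the test but are written here for illustration only):

```shell
# Two pods with the same hostPort but different hostIP values; under the rule
# verified above, both can be scheduled onto the same node.
cat > /tmp/hostport-pods.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2
      protocol: TCP
EOF
# kubectl apply -f /tmp/hostport-pods.yaml   # requires a running cluster
grep -c 'hostPort: 54321' /tmp/hostport-pods.yaml   # both pods declare the port
```

A third pod reusing 127.0.0.2:54321 with `protocol: UDP` would likewise not conflict, which is what the test's pod3 exercises.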
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.616 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":159,"skipped":2558,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:26.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:17:26.111: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-8503fcfe-3abd-4416-85ca-8bce139c577c" in namespace "projected-3789" to be "success or failure" Jul 1 13:17:26.126: INFO: Pod "downwardapi-volume-8503fcfe-3abd-4416-85ca-8bce139c577c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.435358ms Jul 1 13:17:28.130: INFO: Pod "downwardapi-volume-8503fcfe-3abd-4416-85ca-8bce139c577c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019186687s Jul 1 13:17:30.134: INFO: Pod "downwardapi-volume-8503fcfe-3abd-4416-85ca-8bce139c577c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022836466s STEP: Saw pod success Jul 1 13:17:30.134: INFO: Pod "downwardapi-volume-8503fcfe-3abd-4416-85ca-8bce139c577c" satisfied condition "success or failure" Jul 1 13:17:30.136: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8503fcfe-3abd-4416-85ca-8bce139c577c container client-container: STEP: delete the pod Jul 1 13:17:30.381: INFO: Waiting for pod downwardapi-volume-8503fcfe-3abd-4416-85ca-8bce139c577c to disappear Jul 1 13:17:30.443: INFO: Pod downwardapi-volume-8503fcfe-3abd-4416-85ca-8bce139c577c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:30.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3789" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2563,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:30.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:17:30.674: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c1bdc900-28b0-4759-8800-8764e78a8bcf" in namespace "security-context-test-797" to be "success or failure" Jul 1 13:17:30.726: INFO: Pod "alpine-nnp-false-c1bdc900-28b0-4759-8800-8764e78a8bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 52.096122ms Jul 1 13:17:32.966: INFO: Pod "alpine-nnp-false-c1bdc900-28b0-4759-8800-8764e78a8bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292431018s Jul 1 13:17:35.097: INFO: Pod "alpine-nnp-false-c1bdc900-28b0-4759-8800-8764e78a8bcf": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.423206111s Jul 1 13:17:37.101: INFO: Pod "alpine-nnp-false-c1bdc900-28b0-4759-8800-8764e78a8bcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.427362541s Jul 1 13:17:37.101: INFO: Pod "alpine-nnp-false-c1bdc900-28b0-4759-8800-8764e78a8bcf" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:37.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-797" for this suite. • [SLOW TEST:6.533 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:37.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace 
[It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 1 13:17:37.219: INFO: Waiting up to 5m0s for pod "downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31" in namespace "downward-api-6518" to be "success or failure" Jul 1 13:17:37.253: INFO: Pod "downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31": Phase="Pending", Reason="", readiness=false. Elapsed: 33.70201ms Jul 1 13:17:39.256: INFO: Pod "downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036800104s Jul 1 13:17:41.260: INFO: Pod "downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31": Phase="Running", Reason="", readiness=true. Elapsed: 4.04085035s Jul 1 13:17:43.264: INFO: Pod "downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044668878s STEP: Saw pod success Jul 1 13:17:43.264: INFO: Pod "downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31" satisfied condition "success or failure" Jul 1 13:17:43.267: INFO: Trying to get logs from node jerma-worker pod downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31 container dapi-container: STEP: delete the pod Jul 1 13:17:43.291: INFO: Waiting for pod downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31 to disappear Jul 1 13:17:43.337: INFO: Pod downward-api-19df9a92-fd22-43f7-8b55-2ecfe3d28e31 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:43.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6518" for this suite. 
• [SLOW TEST:6.238 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2612,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:43.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-53c21959-6a35-493f-9e8e-23773f1e75d3 STEP: Creating a pod to test consume configMaps Jul 1 13:17:43.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-615962af-0fd2-4cf7-81a4-f2c1a6f97b08" in namespace "configmap-8371" to be "success or failure" Jul 1 13:17:43.492: INFO: Pod "pod-configmaps-615962af-0fd2-4cf7-81a4-f2c1a6f97b08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.508016ms Jul 1 13:17:45.496: INFO: Pod "pod-configmaps-615962af-0fd2-4cf7-81a4-f2c1a6f97b08": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010659291s Jul 1 13:17:47.500: INFO: Pod "pod-configmaps-615962af-0fd2-4cf7-81a4-f2c1a6f97b08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014217309s STEP: Saw pod success Jul 1 13:17:47.500: INFO: Pod "pod-configmaps-615962af-0fd2-4cf7-81a4-f2c1a6f97b08" satisfied condition "success or failure" Jul 1 13:17:47.503: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-615962af-0fd2-4cf7-81a4-f2c1a6f97b08 container configmap-volume-test: STEP: delete the pod Jul 1 13:17:47.547: INFO: Waiting for pod pod-configmaps-615962af-0fd2-4cf7-81a4-f2c1a6f97b08 to disappear Jul 1 13:17:47.552: INFO: Pod pod-configmaps-615962af-0fd2-4cf7-81a4-f2c1a6f97b08 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:47.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8371" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2613,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:47.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 1 13:17:52.161: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d57b3bed-2e02-41a4-a7db-33915ad088c8" Jul 1 13:17:52.161: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d57b3bed-2e02-41a4-a7db-33915ad088c8" in namespace "pods-1634" to be "terminated due to deadline exceeded" Jul 1 13:17:52.217: INFO: Pod "pod-update-activedeadlineseconds-d57b3bed-2e02-41a4-a7db-33915ad088c8": Phase="Running", Reason="", readiness=true. Elapsed: 56.242162ms Jul 1 13:17:54.219: INFO: Pod "pod-update-activedeadlineseconds-d57b3bed-2e02-41a4-a7db-33915ad088c8": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.058845251s Jul 1 13:17:54.219: INFO: Pod "pod-update-activedeadlineseconds-d57b3bed-2e02-41a4-a7db-33915ad088c8" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:17:54.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1634" for this suite. 
• [SLOW TEST:6.669 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2629,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:17:54.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:17:55.134: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:18:01.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206275, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206275, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206275, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206275, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:18:03.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206275, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206275, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206275, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206275, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:18:06.731: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod 
webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:18:06.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7343" for this suite. STEP: Destroying namespace "webhook-7343-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.846 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":165,"skipped":2644,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:18:07.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 1 13:18:08.781: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 1 13:18:10.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206288, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206288, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206288, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206288, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:18:14.050: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:18:14.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] 
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:18:15.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6446" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:8.519 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":166,"skipped":2647,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:18:15.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-b4ebf9ca-e9a1-47e3-a924-62f57f025ad0 in namespace container-probe-3358 Jul 1 13:18:21.720: INFO: Started pod busybox-b4ebf9ca-e9a1-47e3-a924-62f57f025ad0 in namespace container-probe-3358 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 13:18:21.723: INFO: Initial restart count of pod busybox-b4ebf9ca-e9a1-47e3-a924-62f57f025ad0 is 0 Jul 1 13:19:07.837: INFO: Restart count of pod container-probe-3358/busybox-b4ebf9ca-e9a1-47e3-a924-62f57f025ad0 is now 1 (46.114493975s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:19:07.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3358" for this suite. • [SLOW TEST:52.307 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2662,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:19:07.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:19:07.939: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jul 1 13:19:10.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 create -f -' Jul 1 13:19:14.497: INFO: stderr: "" Jul 1 13:19:14.497: INFO: stdout: "e2e-test-crd-publish-openapi-2-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 1 13:19:14.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 delete e2e-test-crd-publish-openapi-2-crds test-foo' Jul 1 13:19:14.632: INFO: stderr: "" Jul 1 13:19:14.632: INFO: stdout: "e2e-test-crd-publish-openapi-2-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jul 1 13:19:14.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 apply -f -' Jul 1 13:19:14.903: INFO: stderr: "" Jul 1 13:19:14.903: INFO: stdout: "e2e-test-crd-publish-openapi-2-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 1 13:19:14.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 delete e2e-test-crd-publish-openapi-2-crds test-foo' Jul 1 13:19:15.005: INFO: stderr: "" Jul 1 13:19:15.005: INFO: stdout: "e2e-test-crd-publish-openapi-2-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side 
validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jul 1 13:19:15.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 create -f -' Jul 1 13:19:15.265: INFO: rc: 1 Jul 1 13:19:15.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 apply -f -' Jul 1 13:19:15.524: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jul 1 13:19:15.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 create -f -' Jul 1 13:19:15.796: INFO: rc: 1 Jul 1 13:19:15.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1909 apply -f -' Jul 1 13:19:16.060: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jul 1 13:19:16.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2-crds' Jul 1 13:19:16.348: INFO: stderr: "" Jul 1 13:19:16.348: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jul 1 13:19:16.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2-crds.metadata' Jul 1 13:19:16.651: INFO: stderr: "" Jul 1 13:19:16.651: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n pass them unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jul 1 13:19:16.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2-crds.spec' Jul 1 13:19:16.911: INFO: stderr: "" Jul 1 13:19:16.912: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jul 1 13:19:16.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2-crds.spec.bars' Jul 1 13:19:17.200: INFO: stderr: "" Jul 1 13:19:17.200: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to 
return error when explain is called on property that doesn't exist Jul 1 13:19:17.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2-crds.spec.bars2' Jul 1 13:19:17.473: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:19:20.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1909" for this suite. • [SLOW TEST:12.495 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":168,"skipped":2684,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:19:20.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Jul 1 13:19:20.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jul 1 13:19:20.719: INFO: stderr: "" Jul 1 13:19:20.719: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:19:20.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8147" for this suite. 
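[Editor's note] The api-versions test above checks that "v1" appears in the newline-separated stdout of `kubectl api-versions`. A minimal sketch of that membership check, with the output captured as a literal (the helper name and truncated sample are illustrative, not part of the e2e framework):

```python
# Sketch: verify that a wanted group/version appears as an exact line of
# `kubectl api-versions` output. Membership must be an exact line match,
# not a substring match ("v1" must not match "apps/v1").
def has_api_version(api_versions_stdout: str, wanted: str) -> bool:
    """Return True if `wanted` is one full line of the output."""
    return wanted in api_versions_stdout.strip().splitlines()

# A truncated sample of the stdout captured in the log above.
sample = "apps/v1\nbatch/v1\nstorage.k8s.io/v1\nv1\n"

print(has_api_version(sample, "v1"))   # True
print(has_api_version(sample, "v2"))   # False
```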
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":169,"skipped":2685,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:19:20.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 1 13:19:25.374: INFO: Successfully updated pod "annotationupdate2d637a0c-f762-4e24-99a6-a6ab5ac725ac" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:19:29.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9495" for this suite. 
• [SLOW TEST:8.735 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:19:29.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-226546e4-e96e-454e-891d-1504a6605162 STEP: Creating a pod to test consume configMaps Jul 1 13:19:29.547: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04484ebb-98ae-4380-ab3a-1d3b2f3fdc33" in namespace "projected-8986" to be "success or failure" Jul 1 13:19:29.563: INFO: Pod "pod-projected-configmaps-04484ebb-98ae-4380-ab3a-1d3b2f3fdc33": Phase="Pending", Reason="", readiness=false. Elapsed: 16.705047ms Jul 1 13:19:31.567: INFO: Pod "pod-projected-configmaps-04484ebb-98ae-4380-ab3a-1d3b2f3fdc33": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020365544s Jul 1 13:19:33.572: INFO: Pod "pod-projected-configmaps-04484ebb-98ae-4380-ab3a-1d3b2f3fdc33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024883133s STEP: Saw pod success Jul 1 13:19:33.572: INFO: Pod "pod-projected-configmaps-04484ebb-98ae-4380-ab3a-1d3b2f3fdc33" satisfied condition "success or failure" Jul 1 13:19:33.576: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-04484ebb-98ae-4380-ab3a-1d3b2f3fdc33 container projected-configmap-volume-test: STEP: delete the pod Jul 1 13:19:33.670: INFO: Waiting for pod pod-projected-configmaps-04484ebb-98ae-4380-ab3a-1d3b2f3fdc33 to disappear Jul 1 13:19:33.723: INFO: Pod pod-projected-configmaps-04484ebb-98ae-4380-ab3a-1d3b2f3fdc33 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:19:33.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8986" for this suite. 
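[Editor's note] Several tests in this run follow the same "Waiting up to 5m0s for pod … to be 'success or failure'" pattern: poll the pod phase on an interval until it reaches a terminal phase or a deadline passes. A minimal sketch of that loop, with the API call and sleeping stubbed out so it runs instantly (`get_phase` is a stand-in, not the real client API):

```python
import itertools
import time

# Poll `get_phase` until the pod reaches a terminal phase (Succeeded or
# Failed) or the deadline elapses, mirroring the e2e wait loop above.
def wait_for_terminal_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                            sleep=time.sleep):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulate the Pending -> Pending -> Succeeded sequence from the log,
# with sleeping replaced by a no-op.
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Succeeded"))
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None)
print(result)  # Succeeded
```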
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2726,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:19:33.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:20:34.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6862" for this suite. 
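[Editor's note] The readiness-probe test below relies on a real Kubernetes behavior: a failing *readiness* probe only keeps the container out of Ready state, while a failing *liveness* probe would restart it. A sketch of the invariant the test asserts over its ~60s observation window, using illustrative status records rather than the real pod status API:

```python
# Invariant for a pod whose readiness probe always fails: every observed
# container status must show ready=False and restart_count=0 (the probe
# gates readiness but never kills the container).
def never_ready_never_restarted(status_samples):
    return all(not s["ready"] and s["restart_count"] == 0
               for s in status_samples)

samples = [
    {"ready": False, "restart_count": 0},
    {"ready": False, "restart_count": 0},
    {"ready": False, "restart_count": 0},
]
print(never_ready_never_restarted(samples))  # True
```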
• [SLOW TEST:61.152 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2748,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:20:34.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-191 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-191 STEP: Creating statefulset with conflicting port in namespace statefulset-191 STEP: Waiting until pod test-pod will start running in namespace 
statefulset-191 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-191 Jul 1 13:20:43.450: INFO: Observed stateful pod in namespace: statefulset-191, name: ss-0, uid: 42c76989-322e-45eb-ab9f-a015472e5505, status phase: Failed. Waiting for statefulset controller to delete. Jul 1 13:20:43.517: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-191 STEP: Removing pod with conflicting port in namespace statefulset-191 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-191 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jul 1 13:20:47.735: INFO: Deleting all statefulset in ns statefulset-191 Jul 1 13:20:47.738: INFO: Scaling statefulset ss to 0 Jul 1 13:20:57.759: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 13:20:57.762: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:20:57.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-191" for this suite. 
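[Editor's note] The StatefulSet test above watches pod ss-0 go Failed (port conflict), waits for the controller to delete it, then waits for a replacement to reach Running. A minimal sketch of that event-ordering check over a stream of (event_type, phase) observations; the tuples are illustrative stand-ins for the real watch API:

```python
# Return True once we have observed, in order: a Failed phase, then a
# DELETED event for that pod, then any event showing the replacement
# pod Running - the sequence the log above reports for ss-0.
def recreated_after_failure(events):
    saw_failed = saw_deleted = False
    for event_type, phase in events:
        if event_type == "DELETED":
            saw_deleted = saw_deleted or saw_failed
        elif phase == "Failed":
            saw_failed = True
        elif saw_deleted and phase == "Running":
            return True
    return False

observed = [
    ("ADDED", "Pending"),
    ("MODIFIED", "Failed"),   # port conflict: pod cannot start
    ("DELETED", "Failed"),    # controller removes the failed pod
    ("ADDED", "Pending"),     # replacement pod created
    ("MODIFIED", "Running"),  # replacement reaches Running
]
print(recreated_after_failure(observed))  # True
```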
• [SLOW TEST:22.922 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":173,"skipped":2752,"failed":0} SSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:20:57.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6394 A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-6394;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6394 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6394;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6394.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6394.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6394.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6394.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6394.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6394.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6394.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6394.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6394.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6394.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6394.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.119.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.119.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.119.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.119.195_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6394 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6394;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6394 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6394;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6394.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6394.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6394.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6394.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6394.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6394.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6394.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6394.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6394.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6394.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6394.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6394.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.119.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.119.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.119.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.119.195_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 13:21:04.116: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.121: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.128: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.133: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.136: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods 
dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.141: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.144: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.162: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.175: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.179: INFO: Unable to read jessie_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.182: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.287: INFO: Unable to read jessie_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested 
resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.291: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.295: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.299: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:04.316: INFO: Lookups using dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6394 wheezy_tcp@dns-test-service.dns-6394 wheezy_udp@dns-test-service.dns-6394.svc wheezy_tcp@dns-test-service.dns-6394.svc wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6394 jessie_tcp@dns-test-service.dns-6394 jessie_udp@dns-test-service.dns-6394.svc jessie_tcp@dns-test-service.dns-6394.svc jessie_udp@_http._tcp.dns-test-service.dns-6394.svc jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc] Jul 1 13:21:09.322: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.325: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the 
requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.328: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.331: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.334: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.336: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.339: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.341: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.357: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.360: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could 
not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.363: INFO: Unable to read jessie_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.365: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.367: INFO: Unable to read jessie_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.370: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.372: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.374: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:09.390: INFO: Lookups using dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6394 wheezy_tcp@dns-test-service.dns-6394 wheezy_udp@dns-test-service.dns-6394.svc wheezy_tcp@dns-test-service.dns-6394.svc wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6394 jessie_tcp@dns-test-service.dns-6394 jessie_udp@dns-test-service.dns-6394.svc jessie_tcp@dns-test-service.dns-6394.svc jessie_udp@_http._tcp.dns-test-service.dns-6394.svc jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc] Jul 1 13:21:14.331: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.362: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.365: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.369: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.442: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.447: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc from pod 
dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.475: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.540: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.586: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.590: INFO: Unable to read jessie_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.596: INFO: Unable to read jessie_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.600: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.603: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:14.624: INFO: Lookups using dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6394 wheezy_tcp@dns-test-service.dns-6394 wheezy_udp@dns-test-service.dns-6394.svc wheezy_tcp@dns-test-service.dns-6394.svc wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6394 jessie_tcp@dns-test-service.dns-6394 jessie_udp@dns-test-service.dns-6394.svc jessie_tcp@dns-test-service.dns-6394.svc jessie_udp@_http._tcp.dns-test-service.dns-6394.svc jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc] Jul 1 13:21:19.321: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.326: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.329: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.332: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.335: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.338: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.340: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.343: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.367: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.370: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.372: INFO: Unable to read jessie_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.375: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.378: INFO: Unable to read jessie_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.381: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.383: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.386: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:19.406: INFO: Lookups using dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6394 wheezy_tcp@dns-test-service.dns-6394 wheezy_udp@dns-test-service.dns-6394.svc wheezy_tcp@dns-test-service.dns-6394.svc wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6394 jessie_tcp@dns-test-service.dns-6394 jessie_udp@dns-test-service.dns-6394.svc jessie_tcp@dns-test-service.dns-6394.svc jessie_udp@_http._tcp.dns-test-service.dns-6394.svc jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc] 
Jul 1 13:21:24.321: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.325: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.328: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.331: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.334: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.336: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.339: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.341: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods 
dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.359: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.362: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.364: INFO: Unable to read jessie_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.367: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.370: INFO: Unable to read jessie_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.376: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.379: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested 
resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:24.397: INFO: Lookups using dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6394 wheezy_tcp@dns-test-service.dns-6394 wheezy_udp@dns-test-service.dns-6394.svc wheezy_tcp@dns-test-service.dns-6394.svc wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6394 jessie_tcp@dns-test-service.dns-6394 jessie_udp@dns-test-service.dns-6394.svc jessie_tcp@dns-test-service.dns-6394.svc jessie_udp@_http._tcp.dns-test-service.dns-6394.svc jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc] Jul 1 13:21:29.322: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.325: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.328: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.332: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.335: INFO: Unable to read wheezy_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods 
dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.338: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.340: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.343: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.364: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.367: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.370: INFO: Unable to read jessie_udp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.372: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394 from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.374: INFO: Unable to read jessie_udp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested 
resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.376: INFO: Unable to read jessie_tcp@dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.379: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.381: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc from pod dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370: the server could not find the requested resource (get pods dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370) Jul 1 13:21:29.396: INFO: Lookups using dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6394 wheezy_tcp@dns-test-service.dns-6394 wheezy_udp@dns-test-service.dns-6394.svc wheezy_tcp@dns-test-service.dns-6394.svc wheezy_udp@_http._tcp.dns-test-service.dns-6394.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6394.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6394 jessie_tcp@dns-test-service.dns-6394 jessie_udp@dns-test-service.dns-6394.svc jessie_tcp@dns-test-service.dns-6394.svc jessie_udp@_http._tcp.dns-test-service.dns-6394.svc jessie_tcp@_http._tcp.dns-test-service.dns-6394.svc] Jul 1 13:21:34.474: INFO: DNS probes using dns-6394/dns-test-af311a44-fbc4-4b8c-8dd7-17ab025ab370 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:21:36.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-6394" for this suite. • [SLOW TEST:38.749 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":174,"skipped":2756,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:21:36.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:21:38.132: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:21:40.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206498, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206498, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206498, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729206498, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:21:43.193: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:21:43.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:21:44.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2656" for this suite. STEP: Destroying namespace "webhook-2656-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.000 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":175,"skipped":2756,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:21:44.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jul 1 13:21:44.707: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:22:01.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8293" for this suite. • [SLOW TEST:16.508 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":176,"skipped":2759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:22:01.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 1 13:22:01.175: INFO: Waiting up to 5m0s for pod "downward-api-663f6c95-06e7-4de0-93cc-fdf1a778066f" in namespace "downward-api-3806" to be "success or failure" Jul 1 
13:22:01.182: INFO: Pod "downward-api-663f6c95-06e7-4de0-93cc-fdf1a778066f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.072995ms Jul 1 13:22:03.186: INFO: Pod "downward-api-663f6c95-06e7-4de0-93cc-fdf1a778066f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010905303s Jul 1 13:22:05.216: INFO: Pod "downward-api-663f6c95-06e7-4de0-93cc-fdf1a778066f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040308418s STEP: Saw pod success Jul 1 13:22:05.216: INFO: Pod "downward-api-663f6c95-06e7-4de0-93cc-fdf1a778066f" satisfied condition "success or failure" Jul 1 13:22:05.219: INFO: Trying to get logs from node jerma-worker pod downward-api-663f6c95-06e7-4de0-93cc-fdf1a778066f container dapi-container: STEP: delete the pod Jul 1 13:22:05.439: INFO: Waiting for pod downward-api-663f6c95-06e7-4de0-93cc-fdf1a778066f to disappear Jul 1 13:22:05.470: INFO: Pod downward-api-663f6c95-06e7-4de0-93cc-fdf1a778066f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:22:05.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3806" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2787,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:22:05.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-7419 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7419 to expose endpoints map[] Jul 1 13:22:05.961: INFO: Get endpoints failed (30.496172ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jul 1 13:22:06.967: INFO: successfully validated that service endpoint-test2 in namespace services-7419 exposes endpoints map[] (1.035645197s elapsed) STEP: Creating pod pod1 in namespace services-7419 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7419 to expose endpoints map[pod1:[80]] Jul 1 13:22:11.042: INFO: successfully validated that service endpoint-test2 in namespace services-7419 exposes endpoints map[pod1:[80]] (4.066336605s elapsed) STEP: Creating pod pod2 in namespace services-7419 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7419 to expose 
endpoints map[pod1:[80] pod2:[80]] Jul 1 13:22:15.243: INFO: Unexpected endpoints: found map[b7dcbf6b-d26a-4040-b495-b27c1be6ab73:[80]], expected map[pod1:[80] pod2:[80]] (4.198019965s elapsed, will retry) Jul 1 13:22:16.250: INFO: successfully validated that service endpoint-test2 in namespace services-7419 exposes endpoints map[pod1:[80] pod2:[80]] (5.204955343s elapsed) STEP: Deleting pod pod1 in namespace services-7419 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7419 to expose endpoints map[pod2:[80]] Jul 1 13:22:17.315: INFO: successfully validated that service endpoint-test2 in namespace services-7419 exposes endpoints map[pod2:[80]] (1.061368434s elapsed) STEP: Deleting pod pod2 in namespace services-7419 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7419 to expose endpoints map[] Jul 1 13:22:18.377: INFO: successfully validated that service endpoint-test2 in namespace services-7419 exposes endpoints map[] (1.057569247s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:22:18.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7419" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:13.109 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":178,"skipped":2807,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:22:18.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0701 13:22:59.076090 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 1 13:22:59.076: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:22:59.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9052" for this suite. 
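The test above deletes an RC with delete options that tell the garbage collector to orphan, not delete, the dependent pods, then waits 30 seconds to confirm the pods survive. A sketch of those options (assuming the meta/v1 JSON wire shape) is:

```python
# Sketch (assumption: the meta/v1 JSON wire format) of the delete options
# that make the garbage collector orphan the pods a ReplicationController
# owns, as exercised by this test.

def orphan_delete_options() -> dict:
    # propagationPolicy: Orphan tells the GC to strip ownerReferences
    # from dependents instead of cascading the delete to them.
    return {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "propagationPolicy": "Orphan",
    }

# Roughly equivalent kubectl: `kubectl delete rc <name> --cascade=orphan`
# (older kubectl releases spelled this `--cascade=false`).
opts = orphan_delete_options()
print(opts["propagationPolicy"])  # Orphan
```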
• [SLOW TEST:40.496 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":179,"skipped":2810,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:22:59.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8236.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8236.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 13:23:05.474: INFO: DNS probes using dns-test-8d4e690d-67cd-49ac-a5de-52163db41842 
succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8236.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8236.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 13:23:14.834: INFO: File wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local from pod dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 13:23:14.837: INFO: File jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local from pod dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 13:23:14.837: INFO: Lookups using dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 failed for: [wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local] Jul 1 13:23:19.842: INFO: File wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local from pod dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 13:23:19.847: INFO: File jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local from pod dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 1 13:23:19.847: INFO: Lookups using dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 failed for: [wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local] Jul 1 13:23:24.842: INFO: File wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local from pod dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 13:23:24.845: INFO: File jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local from pod dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 13:23:24.845: INFO: Lookups using dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 failed for: [wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local] Jul 1 13:23:29.842: INFO: File wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local from pod dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 1 13:23:29.845: INFO: File jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local from pod dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 1 13:23:29.845: INFO: Lookups using dns-8236/dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 failed for: [wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local] Jul 1 13:23:34.845: INFO: DNS probes using dns-test-122fed00-6ba5-40b5-a8d7-e57c31d208a2 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8236.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8236.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8236.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8236.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 13:23:44.411: INFO: DNS probes using dns-test-ad5ea229-061b-48d5-b2cd-acb3dd5d477a succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:23:45.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8236" for this suite. 
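This DNS test cycles one Service through three shapes: ExternalName pointing at `foo.example.com` (DNS answers a CNAME), the same Service repointed at `bar.example.com` (the probes above fail until the change propagates), and finally type ClusterIP (DNS answers an A record). A sketch of those mutations, with shapes assumed to follow the core/v1 Service format:

```python
# Sketch (assumed core/v1 Service shape) of the three Service states this
# DNS test walks through.

def external_name_service(name: str, namespace: str, target: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"type": "ExternalName", "externalName": target},
    }

# State 1: cluster DNS answers the CNAME query for
# dns-test-service-3.dns-8236.svc.cluster.local with foo.example.com.
svc = external_name_service("dns-test-service-3", "dns-8236", "foo.example.com")

# State 2: repoint the CNAME; probes fail until resolvers see the change.
svc["spec"]["externalName"] = "bar.example.com"

# State 3: converting to ClusterIP replaces the CNAME with an A record.
svc["spec"] = {"type": "ClusterIP", "ports": [{"port": 80}]}
```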
• [SLOW TEST:46.514 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":180,"skipped":2815,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:23:45.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3947 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3947 I0701 13:23:46.271202 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3947, replica count: 2 I0701 13:23:49.321636 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 13:23:52.322164 6 runners.go:189] 
externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 13:23:52.322: INFO: Creating new exec pod Jul 1 13:23:57.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3947 execpod4vb2v -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jul 1 13:23:57.714: INFO: stderr: "I0701 13:23:57.458136 2806 log.go:172] (0xc000652b00) (0xc000af00a0) Create stream\nI0701 13:23:57.458191 2806 log.go:172] (0xc000652b00) (0xc000af00a0) Stream added, broadcasting: 1\nI0701 13:23:57.460005 2806 log.go:172] (0xc000652b00) Reply frame received for 1\nI0701 13:23:57.460104 2806 log.go:172] (0xc000652b00) (0xc000af8140) Create stream\nI0701 13:23:57.460142 2806 log.go:172] (0xc000652b00) (0xc000af8140) Stream added, broadcasting: 3\nI0701 13:23:57.461984 2806 log.go:172] (0xc000652b00) Reply frame received for 3\nI0701 13:23:57.462011 2806 log.go:172] (0xc000652b00) (0xc0005fda40) Create stream\nI0701 13:23:57.462018 2806 log.go:172] (0xc000652b00) (0xc0005fda40) Stream added, broadcasting: 5\nI0701 13:23:57.462899 2806 log.go:172] (0xc000652b00) Reply frame received for 5\nI0701 13:23:57.610323 2806 log.go:172] (0xc000652b00) Data frame received for 5\nI0701 13:23:57.610351 2806 log.go:172] (0xc0005fda40) (5) Data frame handling\nI0701 13:23:57.610383 2806 log.go:172] (0xc0005fda40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0701 13:23:57.705473 2806 log.go:172] (0xc000652b00) Data frame received for 5\nI0701 13:23:57.705627 2806 log.go:172] (0xc0005fda40) (5) Data frame handling\nI0701 13:23:57.705675 2806 log.go:172] (0xc0005fda40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0701 13:23:57.706049 2806 log.go:172] (0xc000652b00) Data frame received for 3\nI0701 13:23:57.706106 2806 log.go:172] (0xc000af8140) (3) Data frame handling\nI0701 13:23:57.706153 2806 log.go:172] 
(0xc000652b00) Data frame received for 5\nI0701 13:23:57.706193 2806 log.go:172] (0xc0005fda40) (5) Data frame handling\nI0701 13:23:57.707739 2806 log.go:172] (0xc000652b00) Data frame received for 1\nI0701 13:23:57.707752 2806 log.go:172] (0xc000af00a0) (1) Data frame handling\nI0701 13:23:57.707759 2806 log.go:172] (0xc000af00a0) (1) Data frame sent\nI0701 13:23:57.707772 2806 log.go:172] (0xc000652b00) (0xc000af00a0) Stream removed, broadcasting: 1\nI0701 13:23:57.707788 2806 log.go:172] (0xc000652b00) Go away received\nI0701 13:23:57.708276 2806 log.go:172] (0xc000652b00) (0xc000af00a0) Stream removed, broadcasting: 1\nI0701 13:23:57.708301 2806 log.go:172] (0xc000652b00) (0xc000af8140) Stream removed, broadcasting: 3\nI0701 13:23:57.708312 2806 log.go:172] (0xc000652b00) (0xc0005fda40) Stream removed, broadcasting: 5\n" Jul 1 13:23:57.714: INFO: stdout: "" Jul 1 13:23:57.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3947 execpod4vb2v -- /bin/sh -x -c nc -zv -t -w 2 10.98.120.85 80' Jul 1 13:23:57.930: INFO: stderr: "I0701 13:23:57.846333 2826 log.go:172] (0xc0000f4dc0) (0xc00074e1e0) Create stream\nI0701 13:23:57.846399 2826 log.go:172] (0xc0000f4dc0) (0xc00074e1e0) Stream added, broadcasting: 1\nI0701 13:23:57.848893 2826 log.go:172] (0xc0000f4dc0) Reply frame received for 1\nI0701 13:23:57.848958 2826 log.go:172] (0xc0000f4dc0) (0xc000487900) Create stream\nI0701 13:23:57.848985 2826 log.go:172] (0xc0000f4dc0) (0xc000487900) Stream added, broadcasting: 3\nI0701 13:23:57.850010 2826 log.go:172] (0xc0000f4dc0) Reply frame received for 3\nI0701 13:23:57.850048 2826 log.go:172] (0xc0000f4dc0) (0xc0004d2000) Create stream\nI0701 13:23:57.850059 2826 log.go:172] (0xc0000f4dc0) (0xc0004d2000) Stream added, broadcasting: 5\nI0701 13:23:57.850787 2826 log.go:172] (0xc0000f4dc0) Reply frame received for 5\nI0701 13:23:57.921941 2826 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0701 13:23:57.921986 2826 
log.go:172] (0xc000487900) (3) Data frame handling\nI0701 13:23:57.922012 2826 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0701 13:23:57.922025 2826 log.go:172] (0xc0004d2000) (5) Data frame handling\nI0701 13:23:57.922038 2826 log.go:172] (0xc0004d2000) (5) Data frame sent\nI0701 13:23:57.922049 2826 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0701 13:23:57.922061 2826 log.go:172] (0xc0004d2000) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.120.85 80\nConnection to 10.98.120.85 80 port [tcp/http] succeeded!\nI0701 13:23:57.923466 2826 log.go:172] (0xc0000f4dc0) Data frame received for 1\nI0701 13:23:57.923495 2826 log.go:172] (0xc00074e1e0) (1) Data frame handling\nI0701 13:23:57.923510 2826 log.go:172] (0xc00074e1e0) (1) Data frame sent\nI0701 13:23:57.923526 2826 log.go:172] (0xc0000f4dc0) (0xc00074e1e0) Stream removed, broadcasting: 1\nI0701 13:23:57.923546 2826 log.go:172] (0xc0000f4dc0) Go away received\nI0701 13:23:57.923846 2826 log.go:172] (0xc0000f4dc0) (0xc00074e1e0) Stream removed, broadcasting: 1\nI0701 13:23:57.923867 2826 log.go:172] (0xc0000f4dc0) (0xc000487900) Stream removed, broadcasting: 3\nI0701 13:23:57.923875 2826 log.go:172] (0xc0000f4dc0) (0xc0004d2000) Stream removed, broadcasting: 5\n" Jul 1 13:23:57.930: INFO: stdout: "" Jul 1 13:23:57.930: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:23:58.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3947" for this suite. 
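The reachability check above runs `nc -zv -t -w 2 <host> <port>` from an exec pod, first against the service name and then against its ClusterIP. A stand-in for that probe in plain Python (same semantics: TCP connect with a 2-second timeout, no data sent):

```python
import socket

# Minimal stand-in for the `nc -zv -t -w 2 <host> <port>` probe the test
# runs from its exec pod: attempt a TCP connect with a timeout and report
# success or failure without sending any payload.

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the test, success against both `externalname-service:80` and `10.98.120.85:80` confirms kube-proxy programmed the new ClusterIP endpoints.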
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.423 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":181,"skipped":2819,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:23:58.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:23:58.071: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 1 13:24:01.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7670 create -f -' Jul 1 13:24:06.230: INFO: stderr: "" Jul 1 13:24:06.230: INFO: stdout: "e2e-test-crd-publish-openapi-8687-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 1 
13:24:06.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7670 delete e2e-test-crd-publish-openapi-8687-crds test-cr' Jul 1 13:24:06.365: INFO: stderr: "" Jul 1 13:24:06.365: INFO: stdout: "e2e-test-crd-publish-openapi-8687-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jul 1 13:24:06.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7670 apply -f -' Jul 1 13:24:06.621: INFO: stderr: "" Jul 1 13:24:06.621: INFO: stdout: "e2e-test-crd-publish-openapi-8687-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 1 13:24:06.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7670 delete e2e-test-crd-publish-openapi-8687-crds test-cr' Jul 1 13:24:06.735: INFO: stderr: "" Jul 1 13:24:06.735: INFO: stdout: "e2e-test-crd-publish-openapi-8687-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jul 1 13:24:06.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8687-crds' Jul 1 13:24:07.072: INFO: stderr: "" Jul 1 13:24:07.072: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8687-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:24:09.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7670" for this suite. 
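The CRD under test publishes no validation schema, which is why `kubectl create`/`apply` accept arbitrary unknown properties and `kubectl explain` prints an empty DESCRIPTION. A hedged sketch of such a CRD (hypothetical names; schema-less CRDs are only possible under `apiextensions.k8s.io/v1beta1` as used in this 1.17-era suite, since v1 requires a structural schema):

```python
# Hedged sketch of a schema-less CustomResourceDefinition like the one
# this test publishes. Names are hypothetical; the apiVersion is v1beta1
# because apiextensions.k8s.io/v1 makes a structural schema mandatory.

def minimal_crd(group: str, plural: str, kind: str) -> dict:
    return {
        "apiVersion": "apiextensions.k8s.io/v1beta1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},
        "spec": {
            "group": group,
            "versions": [{"name": "v1", "served": True, "storage": True}],
            "scope": "Namespaced",
            "names": {"plural": plural, "kind": kind},
            # no spec.validation -> no OpenAPI schema is published, so
            # client-side validation cannot reject unknown fields
        },
    }

crd = minimal_crd("crd-publish-openapi-test-empty.example.com",
                  "e2e-test-crds", "E2eTestCrd")
```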
• [SLOW TEST:11.009 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":182,"skipped":2841,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:24:09.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 13:24:09.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7266' Jul 1 13:24:09.233: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 1 13:24:09.233: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jul 1 13:24:09.277: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-6jbht] Jul 1 13:24:09.277: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-6jbht" in namespace "kubectl-7266" to be "running and ready" Jul 1 13:24:09.282: INFO: Pod "e2e-test-httpd-rc-6jbht": Phase="Pending", Reason="", readiness=false. Elapsed: 5.314087ms Jul 1 13:24:11.313: INFO: Pod "e2e-test-httpd-rc-6jbht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035774868s Jul 1 13:24:13.316: INFO: Pod "e2e-test-httpd-rc-6jbht": Phase="Running", Reason="", readiness=true. Elapsed: 4.039110334s Jul 1 13:24:13.316: INFO: Pod "e2e-test-httpd-rc-6jbht" satisfied condition "running and ready" Jul 1 13:24:13.316: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-6jbht] Jul 1 13:24:13.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7266' Jul 1 13:24:13.457: INFO: stderr: "" Jul 1 13:24:13.457: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.40. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.40. 
Set the 'ServerName' directive globally to suppress this message\n[Wed Jul 01 13:24:11.643902 2020] [mpm_event:notice] [pid 1:tid 140158970764136] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Jul 01 13:24:11.643956 2020] [core:notice] [pid 1:tid 140158970764136] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Jul 1 13:24:13.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7266' Jul 1 13:24:13.565: INFO: stderr: "" Jul 1 13:24:13.565: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:24:13.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7266" for this suite. 
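The deprecated `--generator=run/v1` flag used above made `kubectl run` emit a ReplicationController rather than a Pod (modern kubectl drops the generators entirely and `kubectl run` creates a single Pod). A sketch of the RC shape that generator produced, with the conventional `run: <name>` label selector:

```python
# Sketch of the ReplicationController the deprecated
# `kubectl run --generator=run/v1` produced; field shapes assume core/v1.

def run_v1_rc(name: str, image: str, replicas: int = 1) -> dict:
    labels = {"run": name}  # the generator keyed selection on a run=<name> label
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": labels,
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

rc = run_v1_rc("e2e-test-httpd-rc", "docker.io/library/httpd:2.4.38-alpine")
```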
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":183,"skipped":2859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:24:13.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:24:13.655: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:24:14.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5424" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":184,"skipped":2891,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:24:14.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Jul 1 13:24:15.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7189' Jul 1 13:24:15.366: INFO: stderr: "" Jul 1 13:24:15.366: INFO: stdout: "pod/pause created\n" Jul 1 13:24:15.366: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 1 13:24:15.366: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7189" to be "running and ready" Jul 1 13:24:15.369: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.736787ms Jul 1 13:24:17.471: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104393332s Jul 1 13:24:19.475: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.108997421s Jul 1 13:24:19.475: INFO: Pod "pause" satisfied condition "running and ready" Jul 1 13:24:19.475: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jul 1 13:24:19.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7189' Jul 1 13:24:19.600: INFO: stderr: "" Jul 1 13:24:19.600: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 1 13:24:19.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7189' Jul 1 13:24:19.699: INFO: stderr: "" Jul 1 13:24:19.699: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 1 13:24:19.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7189' Jul 1 13:24:19.798: INFO: stderr: "" Jul 1 13:24:19.798: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 1 13:24:19.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7189' Jul 1 13:24:19.902: INFO: stderr: "" Jul 1 13:24:19.902: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Jul 1 13:24:19.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete 
--grace-period=0 --force -f - --namespace=kubectl-7189' Jul 1 13:24:20.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 13:24:20.059: INFO: stdout: "pod \"pause\" force deleted\n" Jul 1 13:24:20.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7189' Jul 1 13:24:20.156: INFO: stderr: "No resources found in kubectl-7189 namespace.\n" Jul 1 13:24:20.156: INFO: stdout: "" Jul 1 13:24:20.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7189 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 13:24:20.253: INFO: stderr: "" Jul 1 13:24:20.253: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:24:20.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7189" for this suite. 
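Behind `kubectl label pods pause testing-label=testing-label-value` and the trailing-dash removal form (`testing-label-`) are merge-patch bodies against the pod's metadata; a key set to JSON `null` is deleted on merge. A sketch of the two patch payloads:

```python
import json

# Sketch of the merge-patch bodies behind `kubectl label`: setting a label
# merges the key in, and the `key-` removal form sends the key with a null
# value, which deletes it on merge.

def label_patch(key: str, value=None) -> str:
    # value=None serializes to JSON null, i.e. "remove this label"
    return json.dumps({"metadata": {"labels": {key: value}}})

add = label_patch("testing-label", "testing-label-value")
remove = label_patch("testing-label")
```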
• [SLOW TEST:5.447 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":185,"skipped":2912,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:24:20.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 1 13:24:20.426: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-a e18b7eb3-d992-42be-876a-e0d2cbad21dc 28790160 0 2020-07-01 13:24:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 13:24:20.426: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-a e18b7eb3-d992-42be-876a-e0d2cbad21dc 28790160 0 2020-07-01 13:24:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 1 13:24:30.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-a e18b7eb3-d992-42be-876a-e0d2cbad21dc 28790203 0 2020-07-01 13:24:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 1 13:24:30.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-a e18b7eb3-d992-42be-876a-e0d2cbad21dc 28790203 0 2020-07-01 13:24:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 1 13:24:40.444: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-a e18b7eb3-d992-42be-876a-e0d2cbad21dc 28790233 0 2020-07-01 13:24:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 13:24:40.444: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-a e18b7eb3-d992-42be-876a-e0d2cbad21dc 28790233 0 2020-07-01 13:24:20 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 1 13:24:50.666: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-a e18b7eb3-d992-42be-876a-e0d2cbad21dc 28790263 0 2020-07-01 13:24:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 13:24:50.667: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-a e18b7eb3-d992-42be-876a-e0d2cbad21dc 28790263 0 2020-07-01 13:24:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 1 13:25:00.673: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-b 6a77037d-60e9-4311-b2ca-f5aea9e95578 28790291 0 2020-07-01 13:25:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 13:25:00.673: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-b 6a77037d-60e9-4311-b2ca-f5aea9e95578 28790291 0 2020-07-01 13:25:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 1 13:25:10.680: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9026 
/api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-b 6a77037d-60e9-4311-b2ca-f5aea9e95578 28790321 0 2020-07-01 13:25:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 13:25:10.680: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-configmap-b 6a77037d-60e9-4311-b2ca-f5aea9e95578 28790321 0 2020-07-01 13:25:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:25:20.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9026" for this suite. • [SLOW TEST:60.432 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":186,"skipped":2920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 
STEP: Creating a kubernetes client Jul 1 13:25:20.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:25:20.819: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:25:26.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-119" for this suite. • [SLOW TEST:6.163 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":187,"skipped":2949,"failed":0} [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client Jul 1 13:25:26.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-37995fd6-893f-41c4-8600-906a47d2431a in namespace container-probe-9640 Jul 1 13:25:33.019: INFO: Started pod busybox-37995fd6-893f-41c4-8600-906a47d2431a in namespace container-probe-9640 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 13:25:33.021: INFO: Initial restart count of pod busybox-37995fd6-893f-41c4-8600-906a47d2431a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:29:34.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9640" for this suite. 
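The liveness probe exercised above (an exec `cat /tmp/health` check that is expected to keep succeeding, so restartCount stays at 0) can be sketched as a pod manifest. This is a minimal illustration, not the test's actual spec: the pod name, delay/period values, and the shell command are assumptions; only the busybox image and the probed file path mirror the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo        # illustrative name
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # Create the health file once, then keep the container alive.
    # The file is never removed, so the probe never fails and the
    # container is never restarted.
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exit 0 while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```

If the probed file were deleted mid-run, the kubelet would restart the container and restartCount would increment, which is exactly what this test asserts does not happen.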
• [SLOW TEST:247.620 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2949,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:29:34.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jul 1 13:29:34.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5352 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jul 1 13:29:39.372: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will 
be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0701 13:29:39.300432 3187 log.go:172] (0xc000b680b0) (0xc000972000) Create stream\nI0701 13:29:39.300483 3187 log.go:172] (0xc000b680b0) (0xc000972000) Stream added, broadcasting: 1\nI0701 13:29:39.302085 3187 log.go:172] (0xc000b680b0) Reply frame received for 1\nI0701 13:29:39.302136 3187 log.go:172] (0xc000b680b0) (0xc000703a40) Create stream\nI0701 13:29:39.302152 3187 log.go:172] (0xc000b680b0) (0xc000703a40) Stream added, broadcasting: 3\nI0701 13:29:39.302858 3187 log.go:172] (0xc000b680b0) Reply frame received for 3\nI0701 13:29:39.302893 3187 log.go:172] (0xc000b680b0) (0xc000703ae0) Create stream\nI0701 13:29:39.302906 3187 log.go:172] (0xc000b680b0) (0xc000703ae0) Stream added, broadcasting: 5\nI0701 13:29:39.303744 3187 log.go:172] (0xc000b680b0) Reply frame received for 5\nI0701 13:29:39.303782 3187 log.go:172] (0xc000b680b0) (0xc0003d8000) Create stream\nI0701 13:29:39.303795 3187 log.go:172] (0xc000b680b0) (0xc0003d8000) Stream added, broadcasting: 7\nI0701 13:29:39.304627 3187 log.go:172] (0xc000b680b0) Reply frame received for 7\nI0701 13:29:39.304748 3187 log.go:172] (0xc000703a40) (3) Writing data frame\nI0701 13:29:39.304825 3187 log.go:172] (0xc000703a40) (3) Writing data frame\nI0701 13:29:39.305566 3187 log.go:172] (0xc000b680b0) Data frame received for 5\nI0701 13:29:39.305586 3187 log.go:172] (0xc000703ae0) (5) Data frame handling\nI0701 13:29:39.305602 3187 log.go:172] (0xc000703ae0) (5) Data frame sent\nI0701 13:29:39.306250 3187 log.go:172] (0xc000b680b0) Data frame received for 5\nI0701 13:29:39.306269 3187 log.go:172] (0xc000703ae0) (5) Data frame handling\nI0701 13:29:39.306285 3187 log.go:172] (0xc000703ae0) (5) Data frame sent\nI0701 13:29:39.345290 3187 log.go:172] (0xc000b680b0) Data frame received for 7\nI0701 13:29:39.345366 3187 log.go:172] (0xc0003d8000) (7) Data frame 
handling\nI0701 13:29:39.345489 3187 log.go:172] (0xc000b680b0) Data frame received for 5\nI0701 13:29:39.345502 3187 log.go:172] (0xc000703ae0) (5) Data frame handling\nI0701 13:29:39.345935 3187 log.go:172] (0xc000b680b0) Data frame received for 1\nI0701 13:29:39.345957 3187 log.go:172] (0xc000972000) (1) Data frame handling\nI0701 13:29:39.345977 3187 log.go:172] (0xc000972000) (1) Data frame sent\nI0701 13:29:39.345990 3187 log.go:172] (0xc000b680b0) (0xc000972000) Stream removed, broadcasting: 1\nI0701 13:29:39.346082 3187 log.go:172] (0xc000b680b0) (0xc000703a40) Stream removed, broadcasting: 3\nI0701 13:29:39.346114 3187 log.go:172] (0xc000b680b0) Go away received\nI0701 13:29:39.346367 3187 log.go:172] (0xc000b680b0) (0xc000972000) Stream removed, broadcasting: 1\nI0701 13:29:39.346388 3187 log.go:172] (0xc000b680b0) (0xc000703a40) Stream removed, broadcasting: 3\nI0701 13:29:39.346398 3187 log.go:172] (0xc000b680b0) (0xc000703ae0) Stream removed, broadcasting: 5\nI0701 13:29:39.346408 3187 log.go:172] (0xc000b680b0) (0xc0003d8000) Stream removed, broadcasting: 7\n" Jul 1 13:29:39.372: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:29:41.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5352" for this suite. 
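The deprecated `kubectl run --generator=job/v1 --rm` flow used above maps onto creating a Job object directly. A rough manifest equivalent follows; the restart policy, image, and command come from the logged invocation, while the container name and other fields are assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure            # matches --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job     # assumed container name
        image: docker.io/library/busybox:1.29
        stdin: true                       # counterpart of --stdin on kubectl run
        # Echoes whatever arrives on stdin, then prints a marker when
        # stdin closes -- hence "abcd1234stdin closed" in the test output.
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

Note the difference from the test: `kubectl run --rm` deletes the Job once the attached session ends, whereas a Job created from a manifest must be deleted explicitly (e.g. with `kubectl delete job`).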
• [SLOW TEST:6.932 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":189,"skipped":3006,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:29:41.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-b7c1c31a-7e0f-4a7b-9d5f-5a954b421af3 STEP: Creating a pod to test consume configMaps Jul 1 13:29:41.482: INFO: Waiting up to 5m0s for pod "pod-configmaps-6e9f0657-28aa-44de-8722-07ba903bb713" in namespace "configmap-6507" to be "success or failure" Jul 1 13:29:41.486: INFO: Pod "pod-configmaps-6e9f0657-28aa-44de-8722-07ba903bb713": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.607961ms Jul 1 13:29:43.522: INFO: Pod "pod-configmaps-6e9f0657-28aa-44de-8722-07ba903bb713": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039872776s Jul 1 13:29:45.525: INFO: Pod "pod-configmaps-6e9f0657-28aa-44de-8722-07ba903bb713": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043058908s STEP: Saw pod success Jul 1 13:29:45.525: INFO: Pod "pod-configmaps-6e9f0657-28aa-44de-8722-07ba903bb713" satisfied condition "success or failure" Jul 1 13:29:45.527: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-6e9f0657-28aa-44de-8722-07ba903bb713 container configmap-volume-test: STEP: delete the pod Jul 1 13:29:45.566: INFO: Waiting for pod pod-configmaps-6e9f0657-28aa-44de-8722-07ba903bb713 to disappear Jul 1 13:29:45.570: INFO: Pod pod-configmaps-6e9f0657-28aa-44de-8722-07ba903bb713 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:29:45.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6507" for this suite. 
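The ConfigMap-volume pod being tested above can be sketched as follows. This is an assumed shape, not the test's generated spec: names, the mount path, and the test image are illustrative, and `defaultMode: 0400` is the read-only-for-owner mode this conformance case conventionally verifies.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo            # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed; any image that can stat the file works
    command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/*"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-demo # illustrative ConfigMap name
      defaultMode: 0400                # every projected file gets mode -r--------
```

The pod runs to completion ("Succeeded") because its command exits after reading the files, which is why the test polls for "success or failure" rather than Running.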
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3006,"failed":0} ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:29:45.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 1 13:29:46.078: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 1 13:29:51.082: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:29:52.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2950" for this suite. 
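The ReplicationController scenario above hinges on label selectors: when a managed pod's label is changed so it no longer matches, the RC "releases" (orphans) it and spins up a replacement. A minimal sketch of such an RC, with image and command assumed (only the `pod-release` name echoes the log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release          # pods matching this label are owned by the RC
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["sleep", "3600"]
```

Patching a matching pod's `name` label to any other value removes it from the selector; the RC drops its ownerReference on that pod and creates a new one to restore the replica count.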
• [SLOW TEST:6.554 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":191,"skipped":3006,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:29:52.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:29:52.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c670025d-c11b-498d-a09d-5cb8146d6056" in namespace "projected-7009" to be "success or failure" Jul 1 13:29:52.432: INFO: Pod "downwardapi-volume-c670025d-c11b-498d-a09d-5cb8146d6056": Phase="Pending", Reason="", readiness=false. Elapsed: 52.062346ms Jul 1 13:29:54.504: INFO: Pod "downwardapi-volume-c670025d-c11b-498d-a09d-5cb8146d6056": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.124615025s Jul 1 13:29:56.508: INFO: Pod "downwardapi-volume-c670025d-c11b-498d-a09d-5cb8146d6056": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128395547s STEP: Saw pod success Jul 1 13:29:56.508: INFO: Pod "downwardapi-volume-c670025d-c11b-498d-a09d-5cb8146d6056" satisfied condition "success or failure" Jul 1 13:29:56.510: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c670025d-c11b-498d-a09d-5cb8146d6056 container client-container: STEP: delete the pod Jul 1 13:29:56.564: INFO: Waiting for pod downwardapi-volume-c670025d-c11b-498d-a09d-5cb8146d6056 to disappear Jul 1 13:29:56.684: INFO: Pod downwardapi-volume-c670025d-c11b-498d-a09d-5cb8146d6056 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:29:56.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7009" for this suite. 
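The "podname only" downward API case above projects the pod's own name into a file via a projected volume. A hedged sketch of the shape involved, with names, image, and paths assumed rather than taken from the generated spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # illustrative name
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed; the e2e test uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # file contents = the pod's name
```

The container prints the file and exits, so the pod reaches "Succeeded" and the test can compare the logged output against the pod name.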
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3074,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:29:56.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-142 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 13:29:56.905: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 1 13:30:19.061: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.47:8080/dial?request=hostname&protocol=http&host=10.244.1.46&port=8080&tries=1'] Namespace:pod-network-test-142 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 13:30:19.061: INFO: >>> kubeConfig: /root/.kube/config I0701 13:30:19.091047 6 log.go:172] (0xc0028ca2c0) (0xc001091220) Create stream I0701 13:30:19.091082 6 log.go:172] (0xc0028ca2c0) (0xc001091220) Stream added, broadcasting: 1 I0701 13:30:19.092887 6 log.go:172] (0xc0028ca2c0) Reply frame received for 1 I0701 13:30:19.092927 6 log.go:172] (0xc0028ca2c0) (0xc0020b4000) Create 
stream I0701 13:30:19.092936 6 log.go:172] (0xc0028ca2c0) (0xc0020b4000) Stream added, broadcasting: 3 I0701 13:30:19.094194 6 log.go:172] (0xc0028ca2c0) Reply frame received for 3 I0701 13:30:19.094239 6 log.go:172] (0xc0028ca2c0) (0xc0010915e0) Create stream I0701 13:30:19.094254 6 log.go:172] (0xc0028ca2c0) (0xc0010915e0) Stream added, broadcasting: 5 I0701 13:30:19.095083 6 log.go:172] (0xc0028ca2c0) Reply frame received for 5 I0701 13:30:19.372262 6 log.go:172] (0xc0028ca2c0) Data frame received for 3 I0701 13:30:19.372286 6 log.go:172] (0xc0020b4000) (3) Data frame handling I0701 13:30:19.372300 6 log.go:172] (0xc0020b4000) (3) Data frame sent I0701 13:30:19.372655 6 log.go:172] (0xc0028ca2c0) Data frame received for 5 I0701 13:30:19.372675 6 log.go:172] (0xc0010915e0) (5) Data frame handling I0701 13:30:19.372764 6 log.go:172] (0xc0028ca2c0) Data frame received for 3 I0701 13:30:19.372779 6 log.go:172] (0xc0020b4000) (3) Data frame handling I0701 13:30:19.374690 6 log.go:172] (0xc0028ca2c0) Data frame received for 1 I0701 13:30:19.374708 6 log.go:172] (0xc001091220) (1) Data frame handling I0701 13:30:19.374720 6 log.go:172] (0xc001091220) (1) Data frame sent I0701 13:30:19.374739 6 log.go:172] (0xc0028ca2c0) (0xc001091220) Stream removed, broadcasting: 1 I0701 13:30:19.374827 6 log.go:172] (0xc0028ca2c0) (0xc001091220) Stream removed, broadcasting: 1 I0701 13:30:19.374843 6 log.go:172] (0xc0028ca2c0) (0xc0020b4000) Stream removed, broadcasting: 3 I0701 13:30:19.374920 6 log.go:172] (0xc0028ca2c0) Go away received I0701 13:30:19.374976 6 log.go:172] (0xc0028ca2c0) (0xc0010915e0) Stream removed, broadcasting: 5 Jul 1 13:30:19.375: INFO: Waiting for responses: map[] Jul 1 13:30:19.378: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.47:8080/dial?request=hostname&protocol=http&host=10.244.2.81&port=8080&tries=1'] Namespace:pod-network-test-142 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Jul 1 13:30:19.378: INFO: >>> kubeConfig: /root/.kube/config I0701 13:30:19.406424 6 log.go:172] (0xc0027b26e0) (0xc0020b5720) Create stream I0701 13:30:19.406451 6 log.go:172] (0xc0027b26e0) (0xc0020b5720) Stream added, broadcasting: 1 I0701 13:30:19.408800 6 log.go:172] (0xc0027b26e0) Reply frame received for 1 I0701 13:30:19.408841 6 log.go:172] (0xc0027b26e0) (0xc000bdcc80) Create stream I0701 13:30:19.408856 6 log.go:172] (0xc0027b26e0) (0xc000bdcc80) Stream added, broadcasting: 3 I0701 13:30:19.410063 6 log.go:172] (0xc0027b26e0) Reply frame received for 3 I0701 13:30:19.410088 6 log.go:172] (0xc0027b26e0) (0xc001091f40) Create stream I0701 13:30:19.410100 6 log.go:172] (0xc0027b26e0) (0xc001091f40) Stream added, broadcasting: 5 I0701 13:30:19.411216 6 log.go:172] (0xc0027b26e0) Reply frame received for 5 I0701 13:30:19.483910 6 log.go:172] (0xc0027b26e0) Data frame received for 3 I0701 13:30:19.483937 6 log.go:172] (0xc000bdcc80) (3) Data frame handling I0701 13:30:19.483956 6 log.go:172] (0xc000bdcc80) (3) Data frame sent I0701 13:30:19.484647 6 log.go:172] (0xc0027b26e0) Data frame received for 3 I0701 13:30:19.484668 6 log.go:172] (0xc000bdcc80) (3) Data frame handling I0701 13:30:19.484823 6 log.go:172] (0xc0027b26e0) Data frame received for 5 I0701 13:30:19.484835 6 log.go:172] (0xc001091f40) (5) Data frame handling I0701 13:30:19.486770 6 log.go:172] (0xc0027b26e0) Data frame received for 1 I0701 13:30:19.486810 6 log.go:172] (0xc0020b5720) (1) Data frame handling I0701 13:30:19.486837 6 log.go:172] (0xc0020b5720) (1) Data frame sent I0701 13:30:19.486854 6 log.go:172] (0xc0027b26e0) (0xc0020b5720) Stream removed, broadcasting: 1 I0701 13:30:19.486905 6 log.go:172] (0xc0027b26e0) Go away received I0701 13:30:19.487043 6 log.go:172] (0xc0027b26e0) (0xc0020b5720) Stream removed, broadcasting: 1 I0701 13:30:19.487080 6 log.go:172] (0xc0027b26e0) (0xc000bdcc80) Stream removed, broadcasting: 3 I0701 
13:30:19.487099 6 log.go:172] (0xc0027b26e0) (0xc001091f40) Stream removed, broadcasting: 5 Jul 1 13:30:19.487: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:30:19.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-142" for this suite. • [SLOW TEST:22.800 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3085,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:30:19.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:30:19.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1825" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":194,"skipped":3097,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:30:19.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3952ff03-2a71-4a5c-834c-dc26448e6c32 STEP: Creating a pod to test consume secrets Jul 1 13:30:19.860: INFO: Waiting up to 5m0s for pod "pod-secrets-c91c7e84-ab03-45de-934f-1f644a3d328e" in namespace "secrets-3419" to be "success or failure" Jul 1 
13:30:19.863: INFO: Pod "pod-secrets-c91c7e84-ab03-45de-934f-1f644a3d328e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.227048ms Jul 1 13:30:21.867: INFO: Pod "pod-secrets-c91c7e84-ab03-45de-934f-1f644a3d328e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007182792s Jul 1 13:30:23.872: INFO: Pod "pod-secrets-c91c7e84-ab03-45de-934f-1f644a3d328e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01164576s STEP: Saw pod success Jul 1 13:30:23.872: INFO: Pod "pod-secrets-c91c7e84-ab03-45de-934f-1f644a3d328e" satisfied condition "success or failure" Jul 1 13:30:23.874: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c91c7e84-ab03-45de-934f-1f644a3d328e container secret-volume-test: STEP: delete the pod Jul 1 13:30:23.895: INFO: Waiting for pod pod-secrets-c91c7e84-ab03-45de-934f-1f644a3d328e to disappear Jul 1 13:30:23.906: INFO: Pod pod-secrets-c91c7e84-ab03-45de-934f-1f644a3d328e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:30:23.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3419" for this suite. STEP: Destroying namespace "secret-namespace-6286" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3104,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:30:23.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:30:30.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1833" for this suite. 
• [SLOW TEST:6.875 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":196,"skipped":3125,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:30:30.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-17363e1d-f9e7-4069-ac41-0a59bdd7c330 STEP: Creating a pod to test consume secrets Jul 1 13:30:31.346: INFO: Waiting up to 5m0s for pod "pod-secrets-ebecff20-1d00-446d-b583-a22868bd6396" in namespace "secrets-1204" to be "success or failure" Jul 1 13:30:31.350: INFO: Pod "pod-secrets-ebecff20-1d00-446d-b583-a22868bd6396": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674909ms Jul 1 13:30:33.457: INFO: Pod "pod-secrets-ebecff20-1d00-446d-b583-a22868bd6396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110888024s Jul 1 13:30:35.462: INFO: Pod "pod-secrets-ebecff20-1d00-446d-b583-a22868bd6396": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.116050122s STEP: Saw pod success Jul 1 13:30:35.462: INFO: Pod "pod-secrets-ebecff20-1d00-446d-b583-a22868bd6396" satisfied condition "success or failure" Jul 1 13:30:35.466: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ebecff20-1d00-446d-b583-a22868bd6396 container secret-volume-test: STEP: delete the pod Jul 1 13:30:35.541: INFO: Waiting for pod pod-secrets-ebecff20-1d00-446d-b583-a22868bd6396 to disappear Jul 1 13:30:35.565: INFO: Pod pod-secrets-ebecff20-1d00-446d-b583-a22868bd6396 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:30:35.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1204" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3135,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:30:35.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:30:35.653: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the 
allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 1 13:30:37.699: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:30:38.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5684" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":198,"skipped":3140,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:30:38.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jul 1 13:30:39.076: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jul 1 13:30:49.801: INFO: >>> kubeConfig: /root/.kube/config Jul 1 
13:30:53.259: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:31:04.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2338" for this suite. • [SLOW TEST:25.988 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":199,"skipped":3140,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:31:04.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace 
services-3761 STEP: creating replication controller nodeport-test in namespace services-3761 I0701 13:31:05.050878 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3761, replica count: 2 I0701 13:31:08.101363 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 13:31:11.101638 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 13:31:11.101: INFO: Creating new exec pod Jul 1 13:31:16.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3761 execpodh7vp6 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jul 1 13:31:16.416: INFO: stderr: "I0701 13:31:16.265979 3212 log.go:172] (0xc0000f5130) (0xc000741ea0) Create stream\nI0701 13:31:16.266042 3212 log.go:172] (0xc0000f5130) (0xc000741ea0) Stream added, broadcasting: 1\nI0701 13:31:16.268344 3212 log.go:172] (0xc0000f5130) Reply frame received for 1\nI0701 13:31:16.268380 3212 log.go:172] (0xc0000f5130) (0xc0005534a0) Create stream\nI0701 13:31:16.268389 3212 log.go:172] (0xc0000f5130) (0xc0005534a0) Stream added, broadcasting: 3\nI0701 13:31:16.272297 3212 log.go:172] (0xc0000f5130) Reply frame received for 3\nI0701 13:31:16.272366 3212 log.go:172] (0xc0000f5130) (0xc000741f40) Create stream\nI0701 13:31:16.272385 3212 log.go:172] (0xc0000f5130) (0xc000741f40) Stream added, broadcasting: 5\nI0701 13:31:16.273681 3212 log.go:172] (0xc0000f5130) Reply frame received for 5\nI0701 13:31:16.378097 3212 log.go:172] (0xc0000f5130) Data frame received for 5\nI0701 13:31:16.378125 3212 log.go:172] (0xc000741f40) (5) Data frame handling\nI0701 13:31:16.378138 3212 log.go:172] (0xc000741f40) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0701 13:31:16.404976 3212 log.go:172] (0xc0000f5130) Data frame received for 5\nI0701 
13:31:16.405010 3212 log.go:172] (0xc000741f40) (5) Data frame handling\nI0701 13:31:16.405042 3212 log.go:172] (0xc000741f40) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0701 13:31:16.405298 3212 log.go:172] (0xc0000f5130) Data frame received for 3\nI0701 13:31:16.405317 3212 log.go:172] (0xc0005534a0) (3) Data frame handling\nI0701 13:31:16.405716 3212 log.go:172] (0xc0000f5130) Data frame received for 5\nI0701 13:31:16.405738 3212 log.go:172] (0xc000741f40) (5) Data frame handling\nI0701 13:31:16.407861 3212 log.go:172] (0xc0000f5130) Data frame received for 1\nI0701 13:31:16.407891 3212 log.go:172] (0xc000741ea0) (1) Data frame handling\nI0701 13:31:16.407922 3212 log.go:172] (0xc000741ea0) (1) Data frame sent\nI0701 13:31:16.407946 3212 log.go:172] (0xc0000f5130) (0xc000741ea0) Stream removed, broadcasting: 1\nI0701 13:31:16.408057 3212 log.go:172] (0xc0000f5130) Go away received\nI0701 13:31:16.408394 3212 log.go:172] (0xc0000f5130) (0xc000741ea0) Stream removed, broadcasting: 1\nI0701 13:31:16.408416 3212 log.go:172] (0xc0000f5130) (0xc0005534a0) Stream removed, broadcasting: 3\nI0701 13:31:16.408429 3212 log.go:172] (0xc0000f5130) (0xc000741f40) Stream removed, broadcasting: 5\n" Jul 1 13:31:16.416: INFO: stdout: "" Jul 1 13:31:16.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3761 execpodh7vp6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.94.84 80' Jul 1 13:31:16.619: INFO: stderr: "I0701 13:31:16.547075 3234 log.go:172] (0xc000be2000) (0xc0006f1ae0) Create stream\nI0701 13:31:16.547159 3234 log.go:172] (0xc000be2000) (0xc0006f1ae0) Stream added, broadcasting: 1\nI0701 13:31:16.549826 3234 log.go:172] (0xc000be2000) Reply frame received for 1\nI0701 13:31:16.549884 3234 log.go:172] (0xc000be2000) (0xc000984000) Create stream\nI0701 13:31:16.549925 3234 log.go:172] (0xc000be2000) (0xc000984000) Stream added, broadcasting: 3\nI0701 13:31:16.550667 3234 log.go:172] 
(0xc000be2000) Reply frame received for 3\nI0701 13:31:16.550713 3234 log.go:172] (0xc000be2000) (0xc000454000) Create stream\nI0701 13:31:16.550730 3234 log.go:172] (0xc000be2000) (0xc000454000) Stream added, broadcasting: 5\nI0701 13:31:16.551668 3234 log.go:172] (0xc000be2000) Reply frame received for 5\nI0701 13:31:16.611317 3234 log.go:172] (0xc000be2000) Data frame received for 5\nI0701 13:31:16.611352 3234 log.go:172] (0xc000454000) (5) Data frame handling\nI0701 13:31:16.611365 3234 log.go:172] (0xc000454000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.94.84 80\nConnection to 10.96.94.84 80 port [tcp/http] succeeded!\nI0701 13:31:16.611394 3234 log.go:172] (0xc000be2000) Data frame received for 5\nI0701 13:31:16.611411 3234 log.go:172] (0xc000454000) (5) Data frame handling\nI0701 13:31:16.611446 3234 log.go:172] (0xc000be2000) Data frame received for 3\nI0701 13:31:16.611466 3234 log.go:172] (0xc000984000) (3) Data frame handling\nI0701 13:31:16.612879 3234 log.go:172] (0xc000be2000) Data frame received for 1\nI0701 13:31:16.612907 3234 log.go:172] (0xc0006f1ae0) (1) Data frame handling\nI0701 13:31:16.612921 3234 log.go:172] (0xc0006f1ae0) (1) Data frame sent\nI0701 13:31:16.612936 3234 log.go:172] (0xc000be2000) (0xc0006f1ae0) Stream removed, broadcasting: 1\nI0701 13:31:16.612992 3234 log.go:172] (0xc000be2000) Go away received\nI0701 13:31:16.613576 3234 log.go:172] (0xc000be2000) (0xc0006f1ae0) Stream removed, broadcasting: 1\nI0701 13:31:16.613600 3234 log.go:172] (0xc000be2000) (0xc000984000) Stream removed, broadcasting: 3\nI0701 13:31:16.613613 3234 log.go:172] (0xc000be2000) (0xc000454000) Stream removed, broadcasting: 5\n" Jul 1 13:31:16.619: INFO: stdout: "" Jul 1 13:31:16.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3761 execpodh7vp6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30138' Jul 1 13:31:16.852: INFO: stderr: "I0701 13:31:16.754651 3256 log.go:172] (0xc000a831e0) (0xc000a1c500) 
Create stream\nI0701 13:31:16.754706 3256 log.go:172] (0xc000a831e0) (0xc000a1c500) Stream added, broadcasting: 1\nI0701 13:31:16.759399 3256 log.go:172] (0xc000a831e0) Reply frame received for 1\nI0701 13:31:16.759444 3256 log.go:172] (0xc000a831e0) (0xc000711c20) Create stream\nI0701 13:31:16.759459 3256 log.go:172] (0xc000a831e0) (0xc000711c20) Stream added, broadcasting: 3\nI0701 13:31:16.760211 3256 log.go:172] (0xc000a831e0) Reply frame received for 3\nI0701 13:31:16.760243 3256 log.go:172] (0xc000a831e0) (0xc0006e2820) Create stream\nI0701 13:31:16.760254 3256 log.go:172] (0xc000a831e0) (0xc0006e2820) Stream added, broadcasting: 5\nI0701 13:31:16.761426 3256 log.go:172] (0xc000a831e0) Reply frame received for 5\nI0701 13:31:16.843654 3256 log.go:172] (0xc000a831e0) Data frame received for 3\nI0701 13:31:16.843689 3256 log.go:172] (0xc000711c20) (3) Data frame handling\nI0701 13:31:16.843706 3256 log.go:172] (0xc000a831e0) Data frame received for 5\nI0701 13:31:16.843711 3256 log.go:172] (0xc0006e2820) (5) Data frame handling\nI0701 13:31:16.843718 3256 log.go:172] (0xc0006e2820) (5) Data frame sent\nI0701 13:31:16.843724 3256 log.go:172] (0xc000a831e0) Data frame received for 5\nI0701 13:31:16.843728 3256 log.go:172] (0xc0006e2820) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30138\nConnection to 172.17.0.10 30138 port [tcp/30138] succeeded!\nI0701 13:31:16.845083 3256 log.go:172] (0xc000a831e0) Data frame received for 1\nI0701 13:31:16.845108 3256 log.go:172] (0xc000a1c500) (1) Data frame handling\nI0701 13:31:16.845282 3256 log.go:172] (0xc000a1c500) (1) Data frame sent\nI0701 13:31:16.845298 3256 log.go:172] (0xc000a831e0) (0xc000a1c500) Stream removed, broadcasting: 1\nI0701 13:31:16.845312 3256 log.go:172] (0xc000a831e0) Go away received\nI0701 13:31:16.845552 3256 log.go:172] (0xc000a831e0) (0xc000a1c500) Stream removed, broadcasting: 1\nI0701 13:31:16.845572 3256 log.go:172] (0xc000a831e0) (0xc000711c20) Stream removed, broadcasting: 3\nI0701 
13:31:16.845581 3256 log.go:172] (0xc000a831e0) (0xc0006e2820) Stream removed, broadcasting: 5\n" Jul 1 13:31:16.852: INFO: stdout: "" Jul 1 13:31:16.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3761 execpodh7vp6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30138' Jul 1 13:31:17.058: INFO: stderr: "I0701 13:31:16.981552 3277 log.go:172] (0xc000b52160) (0xc000ba00a0) Create stream\nI0701 13:31:16.981612 3277 log.go:172] (0xc000b52160) (0xc000ba00a0) Stream added, broadcasting: 1\nI0701 13:31:16.983559 3277 log.go:172] (0xc000b52160) Reply frame received for 1\nI0701 13:31:16.983631 3277 log.go:172] (0xc000b52160) (0xc000b400a0) Create stream\nI0701 13:31:16.983658 3277 log.go:172] (0xc000b52160) (0xc000b400a0) Stream added, broadcasting: 3\nI0701 13:31:16.984708 3277 log.go:172] (0xc000b52160) Reply frame received for 3\nI0701 13:31:16.984750 3277 log.go:172] (0xc000b52160) (0xc000681d60) Create stream\nI0701 13:31:16.984765 3277 log.go:172] (0xc000b52160) (0xc000681d60) Stream added, broadcasting: 5\nI0701 13:31:16.985933 3277 log.go:172] (0xc000b52160) Reply frame received for 5\nI0701 13:31:17.046623 3277 log.go:172] (0xc000b52160) Data frame received for 3\nI0701 13:31:17.046655 3277 log.go:172] (0xc000b52160) Data frame received for 5\nI0701 13:31:17.046680 3277 log.go:172] (0xc000681d60) (5) Data frame handling\nI0701 13:31:17.046690 3277 log.go:172] (0xc000681d60) (5) Data frame sent\nI0701 13:31:17.046698 3277 log.go:172] (0xc000b52160) Data frame received for 5\nI0701 13:31:17.046705 3277 log.go:172] (0xc000681d60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30138\nConnection to 172.17.0.8 30138 port [tcp/30138] succeeded!\nI0701 13:31:17.046726 3277 log.go:172] (0xc000b400a0) (3) Data frame handling\nI0701 13:31:17.048454 3277 log.go:172] (0xc000b52160) Data frame received for 1\nI0701 13:31:17.048564 3277 log.go:172] (0xc000ba00a0) (1) Data frame handling\nI0701 13:31:17.048586 3277 
log.go:172] (0xc000ba00a0) (1) Data frame sent\nI0701 13:31:17.048598 3277 log.go:172] (0xc000b52160) (0xc000ba00a0) Stream removed, broadcasting: 1\nI0701 13:31:17.049410 3277 log.go:172] (0xc000b52160) Go away received\nI0701 13:31:17.050082 3277 log.go:172] (0xc000b52160) (0xc000ba00a0) Stream removed, broadcasting: 1\nI0701 13:31:17.050136 3277 log.go:172] (0xc000b52160) (0xc000b400a0) Stream removed, broadcasting: 3\nI0701 13:31:17.050156 3277 log.go:172] (0xc000b52160) (0xc000681d60) Stream removed, broadcasting: 5\n" Jul 1 13:31:17.058: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:31:17.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3761" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.199 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":200,"skipped":3156,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:31:17.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-b9c5932e-7f12-4608-8f90-9df2c0645ff4 STEP: Creating a pod to test consume secrets Jul 1 13:31:17.174: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e" in namespace "projected-6548" to be "success or failure" Jul 1 13:31:17.190: INFO: Pod "pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.490682ms Jul 1 13:31:19.242: INFO: Pod "pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067996268s Jul 1 13:31:21.246: INFO: Pod "pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e": Phase="Running", Reason="", readiness=true. Elapsed: 4.072637001s Jul 1 13:31:23.250: INFO: Pod "pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.076829147s STEP: Saw pod success Jul 1 13:31:23.251: INFO: Pod "pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e" satisfied condition "success or failure" Jul 1 13:31:23.253: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e container projected-secret-volume-test: STEP: delete the pod Jul 1 13:31:23.306: INFO: Waiting for pod pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e to disappear Jul 1 13:31:23.345: INFO: Pod pod-projected-secrets-8835c7a4-3f02-4d6e-a603-48f58f1b9a8e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:31:23.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6548" for this suite. • [SLOW TEST:6.345 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3156,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:31:23.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jul 1 13:31:23.623: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:31:38.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5171" for this suite. 
• [SLOW TEST:15.566 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":202,"skipped":3160,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:31:38.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Jul 1 13:31:39.067: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix381444522/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:31:39.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-7347" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":203,"skipped":3162,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:31:39.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 1 13:31:39.227: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 13:31:39.251: INFO: Waiting for terminating namespaces to be deleted... 
Jul 1 13:31:39.253: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jul 1 13:31:39.258: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:31:39.258: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 13:31:39.258: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:31:39.258: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 13:31:39.258: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jul 1 13:31:39.263: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:31:39.263: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 13:31:39.263: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jul 1 13:31:39.263: INFO: Container kube-bench ready: false, restart count 0 Jul 1 13:31:39.263: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:31:39.263: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 13:31:39.263: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jul 1 13:31:39.263: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-91466f15-522f-4675-b10f-a0996c9ca5d4 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-91466f15-522f-4675-b10f-a0996c9ca5d4 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-91466f15-522f-4675-b10f-a0996c9ca5d4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:36:47.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1912" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.488 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":204,"skipped":3164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:36:47.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-x4qjv in namespace proxy-6997
I0701 13:36:47.839690 6 runners.go:189] Created replication controller with name: proxy-service-x4qjv, namespace: proxy-6997, replica count: 1
I0701 13:36:48.890155 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0701 13:36:49.890394 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0701 13:36:50.890606 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0701 13:36:51.890812 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0701 13:36:52.891083 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0701 13:36:53.891280 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0701 13:36:54.891469 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0701 13:36:55.891687 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 13:36:56.891945 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 13:36:57.892231 6 runners.go:189] proxy-service-x4qjv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 13:36:57.896: INFO: setup took 10.126929137s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jul 1 13:36:57.909: INFO: (0) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 12.795094ms) Jul 1 13:36:57.909: INFO: (0) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 12.742987ms) Jul 1 13:36:57.909: INFO: (0) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 12.868714ms) Jul 1 13:36:57.909: INFO: (0) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 12.742252ms) Jul 1 13:36:57.909: INFO: (0) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 12.717426ms) Jul 1 13:36:57.909: INFO: (0) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 12.759961ms) Jul 1 13:36:57.909: INFO: (0) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... 
(200; 12.95936ms) Jul 1 13:36:57.910: INFO: (0) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 13.800297ms) Jul 1 13:36:57.910: INFO: (0) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 13.729602ms) Jul 1 13:36:57.913: INFO: (0) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 17.129374ms) Jul 1 13:36:57.913: INFO: (0) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 17.133973ms) Jul 1 13:36:57.942: INFO: (0) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 45.943341ms) Jul 1 13:36:57.942: INFO: (0) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 4.953099ms) Jul 1 13:36:57.948: INFO: (1) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 5.01669ms) Jul 1 13:36:57.949: INFO: (1) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 6.450971ms) Jul 1 13:36:57.949: INFO: (1) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 6.507891ms) Jul 1 13:36:57.949: INFO: (1) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 6.433248ms) Jul 1 13:36:57.949: INFO: (1) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 6.605025ms) Jul 1 13:36:57.949: INFO: (1) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 6.830784ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 6.865107ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 6.994294ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... 
(200; 7.18732ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 7.288719ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 7.495197ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 7.484844ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 7.623084ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 7.610707ms) Jul 1 13:36:57.950: INFO: (1) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: ... (200; 4.298638ms) Jul 1 13:36:57.955: INFO: (2) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 4.425905ms) Jul 1 13:36:57.955: INFO: (2) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 4.448747ms) Jul 1 13:36:57.955: INFO: (2) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 4.691987ms) Jul 1 13:36:57.955: INFO: (2) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 4.640442ms) Jul 1 13:36:57.958: INFO: (2) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... 
(200; 7.154918ms) Jul 1 13:36:57.958: INFO: (2) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 7.855933ms) Jul 1 13:36:57.961: INFO: (2) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 10.18462ms) Jul 1 13:36:57.961: INFO: (2) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 10.804158ms) Jul 1 13:36:57.961: INFO: (2) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 11.00836ms) Jul 1 13:36:57.961: INFO: (2) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 11.026893ms) Jul 1 13:36:57.961: INFO: (2) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 11.053738ms) Jul 1 13:36:57.961: INFO: (2) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 11.087625ms) Jul 1 13:36:57.966: INFO: (3) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test<... (200; 4.649489ms) Jul 1 13:36:57.966: INFO: (3) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 4.784248ms) Jul 1 13:36:57.966: INFO: (3) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 4.871614ms) Jul 1 13:36:57.967: INFO: (3) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 4.989073ms) Jul 1 13:36:57.967: INFO: (3) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 4.944555ms) Jul 1 13:36:57.967: INFO: (3) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 4.965163ms) Jul 1 13:36:57.967: INFO: (3) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 5.253313ms) Jul 1 13:36:57.967: INFO: (3) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... 
(200; 5.408683ms) Jul 1 13:36:57.967: INFO: (3) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 5.406676ms) Jul 1 13:36:57.967: INFO: (3) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 5.496991ms) Jul 1 13:36:57.968: INFO: (3) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 6.131862ms) Jul 1 13:36:57.968: INFO: (3) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 6.287807ms) Jul 1 13:36:57.968: INFO: (3) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 6.366104ms) Jul 1 13:36:57.968: INFO: (3) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 6.594563ms) Jul 1 13:36:57.968: INFO: (3) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 6.558451ms) Jul 1 13:36:57.972: INFO: (4) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 3.34894ms) Jul 1 13:36:57.972: INFO: (4) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.340256ms) Jul 1 13:36:57.972: INFO: (4) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.581279ms) Jul 1 13:36:57.972: INFO: (4) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 3.589292ms) Jul 1 13:36:57.972: INFO: (4) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... (200; 3.696711ms) Jul 1 13:36:57.972: INFO: (4) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.684162ms) Jul 1 13:36:57.972: INFO: (4) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 3.669704ms) Jul 1 13:36:57.972: INFO: (4) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: ... 
(200; 3.25923ms) Jul 1 13:36:57.977: INFO: (5) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.400008ms) Jul 1 13:36:57.977: INFO: (5) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... (200; 3.60981ms) Jul 1 13:36:57.977: INFO: (5) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.896124ms) Jul 1 13:36:57.977: INFO: (5) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.885893ms) Jul 1 13:36:57.977: INFO: (5) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.914849ms) Jul 1 13:36:57.977: INFO: (5) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 4.014287ms) Jul 1 13:36:57.977: INFO: (5) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 4.071688ms) Jul 1 13:36:57.978: INFO: (5) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 4.045935ms) Jul 1 13:36:57.978: INFO: (5) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 4.60052ms) Jul 1 13:36:57.978: INFO: (5) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 4.668607ms) Jul 1 13:36:57.978: INFO: (5) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 4.790929ms) Jul 1 13:36:57.978: INFO: (5) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 4.942387ms) Jul 1 13:36:57.981: INFO: (6) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... (200; 2.64904ms) Jul 1 13:36:57.982: INFO: (6) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... 
(200; 3.70894ms) Jul 1 13:36:57.982: INFO: (6) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 4.028183ms) Jul 1 13:36:57.982: INFO: (6) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 4.830563ms) Jul 1 13:36:57.983: INFO: (6) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 4.790009ms) Jul 1 13:36:57.983: INFO: (6) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 4.822003ms) Jul 1 13:36:57.987: INFO: (7) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 3.275775ms) Jul 1 13:36:57.987: INFO: (7) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 3.452926ms) Jul 1 13:36:57.987: INFO: (7) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 3.676253ms) Jul 1 13:36:57.987: INFO: (7) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.883897ms) Jul 1 13:36:57.987: INFO: (7) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.952096ms) Jul 1 13:36:57.987: INFO: (7) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... 
(200; 3.992872ms) Jul 1 13:36:57.987: INFO: (7) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.964015ms) Jul 1 13:36:57.988: INFO: (7) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 4.27019ms) Jul 1 13:36:57.988: INFO: (7) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 4.306768ms) Jul 1 13:36:57.988: INFO: (7) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 4.329222ms) Jul 1 13:36:57.988: INFO: (7) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 4.373148ms) Jul 1 13:36:57.988: INFO: (7) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 4.696508ms) Jul 1 13:36:57.988: INFO: (7) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 4.774603ms) Jul 1 13:36:57.988: INFO: (7) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 4.755626ms) Jul 1 13:36:57.991: INFO: (8) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... (200; 2.389422ms) Jul 1 13:36:57.993: INFO: (8) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... 
(200; 4.431265ms) Jul 1 13:36:57.993: INFO: (8) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 4.625373ms) Jul 1 13:36:57.993: INFO: (8) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 4.675098ms) Jul 1 13:36:57.993: INFO: (8) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 4.889657ms) Jul 1 13:36:57.993: INFO: (8) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 5.105731ms) Jul 1 13:36:57.994: INFO: (8) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 5.140774ms) Jul 1 13:36:57.994: INFO: (8) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 5.345615ms) Jul 1 13:36:57.994: INFO: (8) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 4.910616ms) Jul 1 13:36:57.999: INFO: (9) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 4.910937ms) Jul 1 13:36:57.999: INFO: (9) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 4.886314ms) Jul 1 13:36:57.999: INFO: (9) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 4.961987ms) Jul 1 13:36:57.999: INFO: (9) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 4.990705ms) Jul 1 13:36:57.999: INFO: (9) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 4.963972ms) Jul 1 13:36:58.000: INFO: (9) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test<... 
(200; 5.463393ms) Jul 1 13:36:58.000: INFO: (9) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 5.460951ms) Jul 1 13:36:58.000: INFO: (9) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 5.497722ms) Jul 1 13:36:58.000: INFO: (9) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 5.497191ms) Jul 1 13:36:58.001: INFO: (9) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 5.915604ms) Jul 1 13:36:58.003: INFO: (10) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 2.628625ms) Jul 1 13:36:58.003: INFO: (10) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 2.716911ms) Jul 1 13:36:58.003: INFO: (10) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 2.545109ms) Jul 1 13:36:58.003: INFO: (10) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 2.747833ms) Jul 1 13:36:58.004: INFO: (10) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.680177ms) Jul 1 13:36:58.005: INFO: (10) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... 
(200; 3.789426ms) Jul 1 13:36:58.005: INFO: (10) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 3.973208ms) Jul 1 13:36:58.005: INFO: (10) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.802849ms) Jul 1 13:36:58.005: INFO: (10) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 4.042735ms) Jul 1 13:36:58.005: INFO: (10) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 5.467249ms) Jul 1 13:36:58.011: INFO: (11) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 5.762236ms) Jul 1 13:36:58.011: INFO: (11) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 5.74912ms) Jul 1 13:36:58.011: INFO: (11) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 5.740897ms) Jul 1 13:36:58.011: INFO: (11) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 5.748769ms) Jul 1 13:36:58.011: INFO: (11) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... (200; 5.831759ms) Jul 1 13:36:58.012: INFO: (11) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 5.999887ms) Jul 1 13:36:58.012: INFO: (11) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 6.062316ms) Jul 1 13:36:58.012: INFO: (11) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 6.100878ms) Jul 1 13:36:58.012: INFO: (11) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: ... 
(200; 4.490217ms) Jul 1 13:36:58.017: INFO: (12) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 4.386116ms) Jul 1 13:36:58.017: INFO: (12) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 4.657431ms) Jul 1 13:36:58.017: INFO: (12) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 4.692381ms) Jul 1 13:36:58.017: INFO: (12) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 4.654081ms) Jul 1 13:36:58.017: INFO: (12) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test<... (200; 5.214888ms) Jul 1 13:36:58.018: INFO: (12) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 5.232163ms) Jul 1 13:36:58.018: INFO: (12) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 5.196698ms) Jul 1 13:36:58.018: INFO: (12) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 5.332399ms) Jul 1 13:36:58.018: INFO: (12) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 5.274853ms) Jul 1 13:36:58.018: INFO: (12) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 5.418189ms) Jul 1 13:36:58.018: INFO: (12) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 6.031329ms) Jul 1 13:36:58.022: INFO: (13) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 3.100592ms) Jul 1 13:36:58.022: INFO: (13) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... 
(200; 3.062558ms) Jul 1 13:36:58.023: INFO: (13) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 4.139466ms) Jul 1 13:36:58.023: INFO: (13) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 4.50217ms) Jul 1 13:36:58.024: INFO: (13) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 5.412956ms) Jul 1 13:36:58.024: INFO: (13) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 5.487785ms) Jul 1 13:36:58.024: INFO: (13) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 5.510369ms) Jul 1 13:36:58.024: INFO: (13) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 5.468255ms) Jul 1 13:36:58.024: INFO: (13) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 3.362907ms) Jul 1 13:36:58.028: INFO: (14) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.40604ms) Jul 1 13:36:58.028: INFO: (14) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 3.470339ms) Jul 1 13:36:58.028: INFO: (14) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... (200; 3.447171ms) Jul 1 13:36:58.028: INFO: (14) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.439619ms) Jul 1 13:36:58.028: INFO: (14) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 3.617378ms) Jul 1 13:36:58.028: INFO: (14) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... 
(200; 3.796912ms) Jul 1 13:36:58.028: INFO: (14) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.847778ms) Jul 1 13:36:58.030: INFO: (14) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 4.901498ms) Jul 1 13:36:58.030: INFO: (14) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 4.870456ms) Jul 1 13:36:58.030: INFO: (14) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 4.888823ms) Jul 1 13:36:58.030: INFO: (14) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 5.383444ms) Jul 1 13:36:58.030: INFO: (14) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 5.352158ms) Jul 1 13:36:58.030: INFO: (14) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 5.171916ms) Jul 1 13:36:58.032: INFO: (15) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 1.947487ms) Jul 1 13:36:58.032: INFO: (15) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 2.022715ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: ... (200; 10.079777ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 10.101266ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 10.057287ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... 
(200; 10.163436ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 10.086061ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 10.208695ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 10.214921ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 10.168ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 10.301175ms) Jul 1 13:36:58.040: INFO: (15) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 10.479241ms) Jul 1 13:36:58.044: INFO: (16) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 3.411394ms) Jul 1 13:36:58.044: INFO: (16) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... (200; 3.25106ms) Jul 1 13:36:58.044: INFO: (16) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.59499ms) Jul 1 13:36:58.044: INFO: (16) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.572236ms) Jul 1 13:36:58.045: INFO: (16) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... (200; 4.411291ms) Jul 1 13:36:58.045: INFO: (16) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test<... (200; 3.215304ms) Jul 1 13:36:58.051: INFO: (17) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 3.540521ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.694327ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... 
(200; 3.919209ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 3.83112ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test (200; 4.403434ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 4.477284ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 4.326713ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 4.215981ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 4.261727ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 4.357225ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 4.434689ms) Jul 1 13:36:58.052: INFO: (17) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 4.397373ms) Jul 1 13:36:58.054: INFO: (18) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 1.788976ms) Jul 1 13:36:58.054: INFO: (18) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... 
(200; 2.095977ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.249367ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 3.229907ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 3.516872ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 3.603339ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.588ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 3.65097ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:1080/proxy/: test<... (200; 3.587975ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.79935ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 3.846124ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 3.897344ms) Jul 1 13:36:58.056: INFO: (18) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:443/proxy/: test<... (200; 2.768209ms) Jul 1 13:36:58.060: INFO: (19) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69/proxy/: test (200; 3.132029ms) Jul 1 13:36:58.060: INFO: (19) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 3.102529ms) Jul 1 13:36:58.060: INFO: (19) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:462/proxy/: tls qux (200; 3.11774ms) Jul 1 13:36:58.060: INFO: (19) /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:1080/proxy/: ... 
(200; 2.972291ms) Jul 1 13:36:58.060: INFO: (19) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname2/proxy/: tls qux (200; 3.222463ms) Jul 1 13:36:58.061: INFO: (19) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:162/proxy/: bar (200; 4.575114ms) Jul 1 13:36:58.061: INFO: (19) /api/v1/namespaces/proxy-6997/pods/proxy-service-x4qjv-gbs69:160/proxy/: foo (200; 4.585084ms) Jul 1 13:36:58.061: INFO: (19) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname2/proxy/: bar (200; 4.56892ms) Jul 1 13:36:58.061: INFO: (19) /api/v1/namespaces/proxy-6997/pods/https:proxy-service-x4qjv-gbs69:460/proxy/: tls baz (200; 4.587103ms) Jul 1 13:36:58.062: INFO: (19) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname1/proxy/: foo (200; 5.075357ms) Jul 1 13:36:58.062: INFO: (19) /api/v1/namespaces/proxy-6997/services/https:proxy-service-x4qjv:tlsportname1/proxy/: tls baz (200; 4.884453ms) Jul 1 13:36:58.062: INFO: (19) /api/v1/namespaces/proxy-6997/services/http:proxy-service-x4qjv:portname2/proxy/: bar (200; 5.145362ms) Jul 1 13:36:58.062: INFO: (19) /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/: foo (200; 5.175089ms) STEP: deleting ReplicationController proxy-service-x4qjv in namespace proxy-6997, will wait for the garbage collector to delete the pods Jul 1 13:36:58.126: INFO: Deleting ReplicationController proxy-service-x4qjv took: 12.418252ms Jul 1 13:36:58.426: INFO: Terminating ReplicationController proxy-service-x4qjv pods took: 300.226134ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:09.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6997" for this suite. 
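Every request in the attempts above goes through an apiserver proxy path of the form /api/v1/namespaces/&lt;ns&gt;/{pods|services}/[scheme:]&lt;name&gt;[:&lt;port&gt;]/proxy/. A small sketch that rebuilds those paths (proxy_path is a hypothetical helper, not part of any client library):

```python
def proxy_path(namespace, name, kind="pods", port=None, scheme=None):
    """Build an apiserver proxy path like the ones logged above.

    kind is "pods" or "services"; scheme ("http"/"https") and port
    (a port number for pods, a port name for services) are optional
    and are joined to the resource name with ':' when present.
    """
    target = name
    if port is not None:
        target = f"{target}:{port}"
    if scheme:
        target = f"{scheme}:{target}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# Reconstructs two of the URLs exercised in attempt (0):
print(proxy_path("proxy-6997", "proxy-service-x4qjv-gbs69", port=160, scheme="http"))
# /api/v1/namespaces/proxy-6997/pods/http:proxy-service-x4qjv-gbs69:160/proxy/
print(proxy_path("proxy-6997", "proxy-service-x4qjv", kind="services", port="portname1"))
# /api/v1/namespaces/proxy-6997/services/proxy-service-x4qjv:portname1/proxy/
```

The 16 cases per attempt are exactly the combinations of pod vs. service target, named vs. numbered port, and plain vs. http:/https: scheme prefix.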
• [SLOW TEST:21.872 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":205,"skipped":3191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:09.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 1 13:37:09.623: INFO: Waiting up to 5m0s for pod "pod-55728996-0637-425c-a61b-677c1f2a17b0" in namespace "emptydir-2305" to be "success or failure" Jul 1 13:37:09.667: INFO: Pod "pod-55728996-0637-425c-a61b-677c1f2a17b0": Phase="Pending", Reason="", readiness=false. Elapsed: 43.363388ms Jul 1 13:37:11.732: INFO: Pod "pod-55728996-0637-425c-a61b-677c1f2a17b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.108755006s Jul 1 13:37:13.840: INFO: Pod "pod-55728996-0637-425c-a61b-677c1f2a17b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216575938s STEP: Saw pod success Jul 1 13:37:13.840: INFO: Pod "pod-55728996-0637-425c-a61b-677c1f2a17b0" satisfied condition "success or failure" Jul 1 13:37:13.843: INFO: Trying to get logs from node jerma-worker2 pod pod-55728996-0637-425c-a61b-677c1f2a17b0 container test-container: STEP: delete the pod Jul 1 13:37:14.187: INFO: Waiting for pod pod-55728996-0637-425c-a61b-677c1f2a17b0 to disappear Jul 1 13:37:14.190: INFO: Pod pod-55728996-0637-425c-a61b-677c1f2a17b0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:14.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2305" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3253,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:14.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 1 13:37:22.678: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 13:37:22.689: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 13:37:24.689: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 13:37:24.694: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 13:37:26.689: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 13:37:26.694: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 13:37:28.689: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 13:37:28.694: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 13:37:30.689: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 13:37:30.694: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:30.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3586" for this suite. 
• [SLOW TEST:16.524 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3349,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:30.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:37:31.691: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:37:33.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207451, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207451, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207451, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207451, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:37:36.742: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:36.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1428" for this suite. STEP: Destroying namespace "webhook-1428-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.121 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":208,"skipped":3357,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:37.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name 
projected-configmap-test-volume-f8f1ea26-5d38-4dd7-8af6-f2aec271a5ff STEP: Creating a pod to test consume configMaps Jul 1 13:37:38.425: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-527e7c69-080d-416e-9c11-b2992c7a51c9" in namespace "projected-9483" to be "success or failure" Jul 1 13:37:38.587: INFO: Pod "pod-projected-configmaps-527e7c69-080d-416e-9c11-b2992c7a51c9": Phase="Pending", Reason="", readiness=false. Elapsed: 161.575383ms Jul 1 13:37:40.599: INFO: Pod "pod-projected-configmaps-527e7c69-080d-416e-9c11-b2992c7a51c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173300347s Jul 1 13:37:42.618: INFO: Pod "pod-projected-configmaps-527e7c69-080d-416e-9c11-b2992c7a51c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.193155381s STEP: Saw pod success Jul 1 13:37:42.619: INFO: Pod "pod-projected-configmaps-527e7c69-080d-416e-9c11-b2992c7a51c9" satisfied condition "success or failure" Jul 1 13:37:42.621: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-527e7c69-080d-416e-9c11-b2992c7a51c9 container projected-configmap-volume-test: STEP: delete the pod Jul 1 13:37:42.671: INFO: Waiting for pod pod-projected-configmaps-527e7c69-080d-416e-9c11-b2992c7a51c9 to disappear Jul 1 13:37:42.706: INFO: Pod pod-projected-configmaps-527e7c69-080d-416e-9c11-b2992c7a51c9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:42.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9483" for this suite. 
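[Annotation: the pod the framework generates for this spec can be sketched roughly as below. This is an illustrative manifest, not the exact object the test creates; the image tag, pod name, configMap name, and mode value are assumptions.]

```yaml
# Illustrative approximation of the e2e "projected configMap with
# defaultMode set" pod: a configMap projected into a volume, with the
# projection's defaultMode controlling the file permission bits.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    args: ["mounttest", "--file_mode=/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400        # assumed mode; applies to all projected files
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative name
          items:
          - key: data-1
            path: data-1
```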
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3367,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:42.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:37:44.008: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:37:46.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207464, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207464, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207464, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207463, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:37:49.076: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:49.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8886" for this suite. STEP: Destroying namespace "webhook-8886-markers" for this suite. 
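[Annotation: "fail closed" refers to `failurePolicy: Fail` — requests matched by the webhook's rules are rejected whenever the backend cannot be reached. A sketch of such a registration follows; the webhook name, path, and rule set are illustrative, not the exact configuration the test registers.]

```yaml
# Illustrative ValidatingWebhookConfiguration with a fail-closed policy:
# because failurePolicy is Fail, matching API requests are rejected
# unconditionally while the backing service is unreachable.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example            # illustrative name
webhooks:
- name: fail-closed.example.com        # illustrative name
  failurePolicy: Fail                  # "fail closed": reject on webhook error
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-8886          # namespace from this run
      name: e2e-test-webhook
      path: /configmaps                # illustrative path
  sideEffects: None
  admissionReviewVersions: ["v1"]
```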
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.662 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":210,"skipped":3375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:49.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:53.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4317" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3399,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:53.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:37:53.590: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6" in namespace "downward-api-1471" to be "success or failure" Jul 1 13:37:53.613: INFO: Pod "downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.204878ms Jul 1 13:37:55.618: INFO: Pod "downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027939475s Jul 1 13:37:57.621: INFO: Pod "downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.031607992s Jul 1 13:37:59.626: INFO: Pod "downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036110889s STEP: Saw pod success Jul 1 13:37:59.626: INFO: Pod "downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6" satisfied condition "success or failure" Jul 1 13:37:59.630: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6 container client-container: STEP: delete the pod Jul 1 13:37:59.759: INFO: Waiting for pod downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6 to disappear Jul 1 13:37:59.768: INFO: Pod downwardapi-volume-a18ece8f-179f-4a3e-ac09-aa01660112d6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:59.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1471" for this suite. 
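[Annotation: what this spec exercises can be sketched as a downward API volume projecting `limits.cpu` for a container that sets no CPU limit, in which case the projected value falls back to the node's allocatable CPU. Illustrative manifest only; pod name and image are assumptions.]

```yaml
# Illustrative downward API volume pod: the container declares no cpu
# limit, so the limits.cpu projection defaults to node allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    args: ["mounttest", "--file_content=/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```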
• [SLOW TEST:6.266 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3402,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:59.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the 
/apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:37:59.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3134" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":213,"skipped":3405,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:37:59.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 1 13:38:05.067: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:38:05.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-956" for this suite. • [SLOW TEST:6.584 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":214,"skipped":3411,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:38:06.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:38:06.788: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.532211ms) Jul 1 13:38:06.833: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 45.586189ms) Jul 1 13:38:06.837: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.202372ms) Jul 1 13:38:06.846: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 9.077672ms) Jul 1 13:38:06.851: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 5.631796ms) Jul 1 13:38:06.857: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 5.827466ms) Jul 1 13:38:06.860: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.067095ms) Jul 1 13:38:06.863: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.809322ms) Jul 1 13:38:06.866: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.89695ms) Jul 1 13:38:06.869: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.136387ms) Jul 1 13:38:06.872: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.048492ms) Jul 1 13:38:06.875: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.90834ms) Jul 1 13:38:06.878: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.407881ms) Jul 1 13:38:06.880: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.616298ms) Jul 1 13:38:06.901: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 20.160518ms) Jul 1 13:38:06.904: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.735289ms) Jul 1 13:38:06.908: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.409832ms) Jul 1 13:38:06.911: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.478645ms) Jul 1 13:38:06.914: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.089446ms) Jul 1 13:38:06.918: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 3.183426ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:38:06.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7732" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":215,"skipped":3418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:38:06.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-5a717ac6-7ddf-434f-a8d2-d1488a16d5f3 STEP: Creating a pod to test consume configMaps Jul 1 13:38:07.022: INFO: Waiting up to 5m0s for pod "pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93" in namespace "configmap-170" to be "success or failure" Jul 1 13:38:07.064: INFO: Pod "pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93": Phase="Pending", Reason="", readiness=false. Elapsed: 41.802727ms Jul 1 13:38:09.164: INFO: Pod "pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.142359852s Jul 1 13:38:11.169: INFO: Pod "pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146771807s Jul 1 13:38:13.173: INFO: Pod "pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151099562s STEP: Saw pod success Jul 1 13:38:13.173: INFO: Pod "pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93" satisfied condition "success or failure" Jul 1 13:38:13.176: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93 container configmap-volume-test: STEP: delete the pod Jul 1 13:38:13.244: INFO: Waiting for pod pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93 to disappear Jul 1 13:38:13.252: INFO: Pod pod-configmaps-86bbe914-b45c-4520-b64a-c320d0236e93 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:38:13.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-170" for this suite. 
• [SLOW TEST:6.334 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3448,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:38:13.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Jul 1 13:38:13.382: INFO: Waiting up to 5m0s for pod "var-expansion-486c09bd-6b88-46ca-b801-bd4126031a6e" in namespace "var-expansion-3278" to be "success or failure" Jul 1 13:38:13.408: INFO: Pod "var-expansion-486c09bd-6b88-46ca-b801-bd4126031a6e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.822279ms Jul 1 13:38:15.459: INFO: Pod "var-expansion-486c09bd-6b88-46ca-b801-bd4126031a6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.076906655s Jul 1 13:38:17.463: INFO: Pod "var-expansion-486c09bd-6b88-46ca-b801-bd4126031a6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081007136s STEP: Saw pod success Jul 1 13:38:17.463: INFO: Pod "var-expansion-486c09bd-6b88-46ca-b801-bd4126031a6e" satisfied condition "success or failure" Jul 1 13:38:17.466: INFO: Trying to get logs from node jerma-worker pod var-expansion-486c09bd-6b88-46ca-b801-bd4126031a6e container dapi-container: STEP: delete the pod Jul 1 13:38:17.496: INFO: Waiting for pod var-expansion-486c09bd-6b88-46ca-b801-bd4126031a6e to disappear Jul 1 13:38:17.559: INFO: Pod var-expansion-486c09bd-6b88-46ca-b801-bd4126031a6e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:38:17.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3278" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3455,"failed":0} SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:38:17.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with 
name cm-test-opt-del-4a0f34ec-e61b-4e58-a68d-8dd48dda87dc STEP: Creating configMap with name cm-test-opt-upd-fbf99f52-9593-4868-bad0-b35c2f5e1a0f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4a0f34ec-e61b-4e58-a68d-8dd48dda87dc STEP: Updating configmap cm-test-opt-upd-fbf99f52-9593-4868-bad0-b35c2f5e1a0f STEP: Creating configMap with name cm-test-opt-create-889dba7a-9383-4f33-b9f6-8e3d415d2fe6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:39:29.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7347" for this suite. • [SLOW TEST:71.561 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3457,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:39:29.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with 
defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-75e09f25-0ab7-486f-bed4-414e6557e152 STEP: Creating a pod to test consume secrets Jul 1 13:39:29.217: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-edbb39c4-dc75-42b5-bde0-089b278dbf3f" in namespace "projected-3101" to be "success or failure" Jul 1 13:39:29.234: INFO: Pod "pod-projected-secrets-edbb39c4-dc75-42b5-bde0-089b278dbf3f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.600858ms Jul 1 13:39:31.273: INFO: Pod "pod-projected-secrets-edbb39c4-dc75-42b5-bde0-089b278dbf3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055997619s Jul 1 13:39:33.277: INFO: Pod "pod-projected-secrets-edbb39c4-dc75-42b5-bde0-089b278dbf3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059535521s STEP: Saw pod success Jul 1 13:39:33.277: INFO: Pod "pod-projected-secrets-edbb39c4-dc75-42b5-bde0-089b278dbf3f" satisfied condition "success or failure" Jul 1 13:39:33.279: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-edbb39c4-dc75-42b5-bde0-089b278dbf3f container projected-secret-volume-test: STEP: delete the pod Jul 1 13:39:33.327: INFO: Waiting for pod pod-projected-secrets-edbb39c4-dc75-42b5-bde0-089b278dbf3f to disappear Jul 1 13:39:33.392: INFO: Pod pod-projected-secrets-edbb39c4-dc75-42b5-bde0-089b278dbf3f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:39:33.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3101" for this suite. 
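The projected-secret test above mounts the secret with an explicit `defaultMode`. In a rough sketch, the kubelet projects each secret key as a file and applies that octal mode to it; the file name and mode value below are illustrative, not taken from the test:

```python
import os
import stat
import tempfile

def project_secret_file(directory, name, data, mode=0o644):
    """Write one secret key as a file and apply the volume's
    defaultMode, roughly what the kubelet does for secret volumes."""
    path = os.path.join(directory, name)
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, mode)
    # Return the permission bits actually applied to the file.
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    applied = project_secret_file(d, "data-1", b"value-1", mode=0o400)
```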
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:39:33.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:39:33.583: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
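The repeated "Number of nodes with available pods" lines that follow come from a readiness check: nodes whose taints the DaemonSet does not tolerate (here jerma-control-plane with `node-role.kubernetes.io/master:NoSchedule`) are skipped, and the count of nodes with a ready daemon pod is compared to the count of schedulable nodes. A minimal sketch under assumed data shapes (this is not the framework's real helper):

```python
def nodes_ready_for_daemonset(nodes, tolerations, ready_pods_by_node):
    """Count schedulable nodes and nodes with an available daemon pod,
    skipping nodes with untolerated NoSchedule taints. Input shapes
    (dicts of plain strings) are assumptions for illustration."""
    schedulable, available = 0, 0
    for node, taints in nodes.items():
        if any(t not in tolerations and t.endswith(":NoSchedule") for t in taints):
            continue  # e.g. the control-plane node in the log above
        schedulable += 1
        if ready_pods_by_node.get(node):
            available += 1
    return schedulable, available

nodes = {
    "jerma-control-plane": ["node-role.kubernetes.io/master:NoSchedule"],
    "jerma-worker": [],
    "jerma-worker2": [],
}
# Both workers have a ready daemon pod; the tainted node is skipped,
# matching the final "running nodes: 2, available pods: 2" state.
counts = nodes_ready_for_daemonset(
    nodes, set(), {"jerma-worker": True, "jerma-worker2": True}
)
```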
Jul 1 13:39:33.636: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:33.643: INFO: Number of nodes with available pods: 0 Jul 1 13:39:33.643: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:39:34.790: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:34.792: INFO: Number of nodes with available pods: 0 Jul 1 13:39:34.792: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:39:36.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:36.170: INFO: Number of nodes with available pods: 0 Jul 1 13:39:36.170: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:39:36.705: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:36.709: INFO: Number of nodes with available pods: 0 Jul 1 13:39:36.709: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:39:37.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:37.692: INFO: Number of nodes with available pods: 0 Jul 1 13:39:37.692: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:39:38.838: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:38.891: INFO: Number of nodes with available pods: 1 Jul 1 13:39:38.891: INFO: Node jerma-worker is 
running more than one daemon pod Jul 1 13:39:39.670: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:39.674: INFO: Number of nodes with available pods: 2 Jul 1 13:39:39.674: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 1 13:39:39.908: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:39.908: INFO: Wrong image for pod: daemon-set-w47dx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:39.950: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:40.955: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:40.955: INFO: Wrong image for pod: daemon-set-w47dx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:40.960: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:41.992: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:41.992: INFO: Wrong image for pod: daemon-set-w47dx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 1 13:39:41.996: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:42.955: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:42.956: INFO: Wrong image for pod: daemon-set-w47dx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:42.960: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:43.955: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:43.955: INFO: Wrong image for pod: daemon-set-w47dx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:43.955: INFO: Pod daemon-set-w47dx is not available Jul 1 13:39:43.959: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:44.956: INFO: Pod daemon-set-pd7w4 is not available Jul 1 13:39:44.956: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:44.960: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:45.958: INFO: Pod daemon-set-pd7w4 is not available Jul 1 13:39:45.958: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 1 13:39:45.960: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:46.989: INFO: Pod daemon-set-pd7w4 is not available Jul 1 13:39:46.989: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:47.053: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:47.956: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:47.960: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:49.028: INFO: Wrong image for pod: daemon-set-r5sjh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 13:39:49.032: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:49.982: INFO: Pod daemon-set-lcsk6 is not available Jul 1 13:39:50.316: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:50.954: INFO: Pod daemon-set-lcsk6 is not available Jul 1 13:39:50.958: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jul 1 13:39:50.961: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:50.963: INFO: Number of nodes with available pods: 1 Jul 1 13:39:50.963: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:39:51.968: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:51.971: INFO: Number of nodes with available pods: 1 Jul 1 13:39:51.971: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:39:52.968: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:52.970: INFO: Number of nodes with available pods: 1 Jul 1 13:39:52.970: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:39:53.968: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 13:39:53.971: INFO: Number of nodes with available pods: 2 Jul 1 13:39:53.971: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1408, will wait for the garbage collector to delete the pods Jul 1 13:39:54.060: INFO: Deleting DaemonSet.extensions daemon-set took: 22.161414ms Jul 1 13:39:54.361: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.264702ms Jul 1 13:39:59.633: INFO: Number of nodes with available pods: 0 Jul 1 13:39:59.633: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 
13:39:59.636: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1408/daemonsets","resourceVersion":"28794117"},"items":null} Jul 1 13:39:59.639: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1408/pods","resourceVersion":"28794117"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:39:59.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1408" for this suite. • [SLOW TEST:26.232 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":220,"skipped":3508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:39:59.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jul 1 13:40:05.851: INFO: Pod pod-hostip-691901aa-82d0-44d6-94bd-841ee7647f4b has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:40:05.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8468" for this suite. • [SLOW TEST:6.203 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:40:05.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:40:06.905: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Jul 1 13:40:08.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207606, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207606, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207607, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207606, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:40:12.053: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:40:12.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2793" for this suite. STEP: Destroying namespace "webhook-2793-markers" for this suite. 
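A mutating admission webhook like the one deployed above answers each AdmissionReview with a base64-encoded JSONPatch that the API server applies to the incoming object. A minimal sketch of building such a response — the dict follows the `admission.k8s.io/v1` shape to the best of my knowledge, and the patch path/value are purely illustrative:

```python
import base64
import json

def mutate_configmap_response(uid):
    """Build an AdmissionReview response that adds a key to a ConfigMap,
    the kind of mutation the webhook test verifies."""
    patch = [{"op": "add", "path": "/data/mutation-stage", "value": "webhook"}]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,          # must echo the request's uid
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }

review = mutate_configmap_response("example-uid")
# Round-trip the patch to show what the API server would apply.
decoded = json.loads(base64.b64decode(review["response"]["patch"]))
```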
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.538 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":222,"skipped":3564,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:40:12.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:40:12.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8433' Jul 1 13:40:17.201: INFO: stderr: "" Jul 1 13:40:17.201: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jul 1 13:40:17.201: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8433' Jul 1 13:40:17.584: INFO: stderr: "" Jul 1 13:40:17.584: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jul 1 13:40:18.603: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:40:18.603: INFO: Found 0 / 1 Jul 1 13:40:19.589: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:40:19.589: INFO: Found 0 / 1 Jul 1 13:40:20.590: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:40:20.590: INFO: Found 1 / 1 Jul 1 13:40:20.590: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 1 13:40:20.593: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 13:40:20.593: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 1 13:40:20.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-lls82 --namespace=kubectl-8433' Jul 1 13:40:20.717: INFO: stderr: "" Jul 1 13:40:20.717: INFO: stdout: "Name: agnhost-master-lls82\nNamespace: kubectl-8433\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Wed, 01 Jul 2020 13:40:17 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.95\nIPs:\n IP: 10.244.2.95\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://b5b098236d95a15642eb3552f9ee3af4cf0c63ebf3458f0675e279d84385c6ca\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 01 Jul 2020 13:40:19 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-5lqpp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-5lqpp:\n 
Type: Secret (a volume populated by a Secret)\n SecretName: default-token-5lqpp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-8433/agnhost-master-lls82 to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Jul 1 13:40:20.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8433' Jul 1 13:40:20.841: INFO: stderr: "" Jul 1 13:40:20.841: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8433\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-lls82\n" Jul 1 13:40:20.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8433' Jul 1 13:40:20.951: INFO: stderr: "" Jul 1 13:40:20.951: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8433\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.100.249.182\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 
10.244.2.95:6379\nSession Affinity: None\nEvents: \n" Jul 1 13:40:20.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Jul 1 13:40:21.081: INFO: stderr: "" Jul 1 13:40:21.081: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Wed, 01 Jul 2020 13:40:14 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 01 Jul 2020 13:38:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 01 Jul 2020 13:38:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 01 Jul 2020 13:38:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 01 Jul 2020 13:38:27 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 
3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 107d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 107d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 107d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 107d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 107d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 107d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 107d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 107d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 107d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jul 1 13:40:21.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8433' Jul 1 13:40:21.180: INFO: stderr: "" Jul 1 13:40:21.180: INFO: stdout: "Name: kubectl-8433\nLabels: e2e-framework=kubectl\n e2e-run=b516189c-e2f4-41a0-94e6-e7a4b8058bb4\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:40:21.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8433" for this suite. • [SLOW TEST:8.788 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":223,"skipped":3583,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:40:21.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all 
changes to the configmap after the first update Jul 1 13:40:21.356: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9682 /api/v1/namespaces/watch-9682/configmaps/e2e-watch-test-resource-version e86db059-510c-4402-b670-d0ea729996fe 28794326 0 2020-07-01 13:40:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 13:40:21.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9682 /api/v1/namespaces/watch-9682/configmaps/e2e-watch-test-resource-version e86db059-510c-4402-b670-d0ea729996fe 28794327 0 2020-07-01 13:40:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:40:21.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9682" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":224,"skipped":3592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:40:21.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:40:21.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jul 1 13:40:21.586: INFO: stderr: "" Jul 1 13:40:21.586: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-07-01T11:42:38Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:40:21.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "kubectl-8953" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":225,"skipped":3622,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:40:21.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jul 1 13:40:21.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-409' Jul 1 13:40:21.966: INFO: stderr: "" Jul 1 13:40:21.966: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 1 13:40:21.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-409' Jul 1 13:40:22.080: INFO: stderr: "" Jul 1 13:40:22.080: INFO: stdout: "update-demo-nautilus-2969r update-demo-nautilus-ccv8j " Jul 1 13:40:22.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:22.339: INFO: stderr: "" Jul 1 13:40:22.339: INFO: stdout: "" Jul 1 13:40:22.339: INFO: update-demo-nautilus-2969r is created but not running Jul 1 13:40:27.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-409' Jul 1 13:40:27.452: INFO: stderr: "" Jul 1 13:40:27.453: INFO: stdout: "update-demo-nautilus-2969r update-demo-nautilus-ccv8j " Jul 1 13:40:27.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:27.566: INFO: stderr: "" Jul 1 13:40:27.566: INFO: stdout: "true" Jul 1 13:40:27.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:27.659: INFO: stderr: "" Jul 1 13:40:27.659: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 13:40:27.659: INFO: validating pod update-demo-nautilus-2969r Jul 1 13:40:27.672: INFO: got data: { "image": "nautilus.jpg" } Jul 1 13:40:27.672: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 13:40:27.672: INFO: update-demo-nautilus-2969r is verified up and running Jul 1 13:40:27.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccv8j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:27.763: INFO: stderr: "" Jul 1 13:40:27.763: INFO: stdout: "true" Jul 1 13:40:27.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccv8j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:27.855: INFO: stderr: "" Jul 1 13:40:27.855: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 13:40:27.855: INFO: validating pod update-demo-nautilus-ccv8j Jul 1 13:40:27.911: INFO: got data: { "image": "nautilus.jpg" } Jul 1 13:40:27.911: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 1 13:40:27.911: INFO: update-demo-nautilus-ccv8j is verified up and running STEP: scaling down the replication controller Jul 1 13:40:27.914: INFO: scanned /root for discovery docs: Jul 1 13:40:27.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-409' Jul 1 13:40:29.056: INFO: stderr: "" Jul 1 13:40:29.056: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 13:40:29.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-409' Jul 1 13:40:29.197: INFO: stderr: "" Jul 1 13:40:29.197: INFO: stdout: "update-demo-nautilus-2969r update-demo-nautilus-ccv8j " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 1 13:40:34.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-409' Jul 1 13:40:34.302: INFO: stderr: "" Jul 1 13:40:34.302: INFO: stdout: "update-demo-nautilus-2969r update-demo-nautilus-ccv8j " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 1 13:40:39.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-409' Jul 1 13:40:39.385: INFO: stderr: "" Jul 1 13:40:39.385: INFO: stdout: "update-demo-nautilus-2969r update-demo-nautilus-ccv8j " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 1 13:40:44.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-409' Jul 1 13:40:44.485: INFO: stderr: "" Jul 1 13:40:44.485: 
INFO: stdout: "update-demo-nautilus-2969r " Jul 1 13:40:44.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:44.581: INFO: stderr: "" Jul 1 13:40:44.581: INFO: stdout: "true" Jul 1 13:40:44.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:44.688: INFO: stderr: "" Jul 1 13:40:44.688: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 13:40:44.688: INFO: validating pod update-demo-nautilus-2969r Jul 1 13:40:44.691: INFO: got data: { "image": "nautilus.jpg" } Jul 1 13:40:44.691: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 13:40:44.691: INFO: update-demo-nautilus-2969r is verified up and running STEP: scaling up the replication controller Jul 1 13:40:44.692: INFO: scanned /root for discovery docs: Jul 1 13:40:44.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-409' Jul 1 13:40:45.796: INFO: stderr: "" Jul 1 13:40:45.796: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 1 13:40:45.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-409' Jul 1 13:40:45.889: INFO: stderr: "" Jul 1 13:40:45.889: INFO: stdout: "update-demo-nautilus-2969r update-demo-nautilus-75k5p " Jul 1 13:40:45.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:45.980: INFO: stderr: "" Jul 1 13:40:45.980: INFO: stdout: "true" Jul 1 13:40:45.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:46.071: INFO: stderr: "" Jul 1 13:40:46.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 13:40:46.071: INFO: validating pod update-demo-nautilus-2969r Jul 1 13:40:46.074: INFO: got data: { "image": "nautilus.jpg" } Jul 1 13:40:46.074: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 13:40:46.074: INFO: update-demo-nautilus-2969r is verified up and running Jul 1 13:40:46.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75k5p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:46.223: INFO: stderr: "" Jul 1 13:40:46.223: INFO: stdout: "" Jul 1 13:40:46.223: INFO: update-demo-nautilus-75k5p is created but not running Jul 1 13:40:51.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-409' Jul 1 13:40:51.326: INFO: stderr: "" Jul 1 13:40:51.326: INFO: stdout: "update-demo-nautilus-2969r update-demo-nautilus-75k5p " Jul 1 13:40:51.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:51.425: INFO: stderr: "" Jul 1 13:40:51.425: INFO: stdout: "true" Jul 1 13:40:51.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2969r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:51.521: INFO: stderr: "" Jul 1 13:40:51.522: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 13:40:51.522: INFO: validating pod update-demo-nautilus-2969r Jul 1 13:40:51.525: INFO: got data: { "image": "nautilus.jpg" } Jul 1 13:40:51.525: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 13:40:51.525: INFO: update-demo-nautilus-2969r is verified up and running Jul 1 13:40:51.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75k5p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:51.616: INFO: stderr: "" Jul 1 13:40:51.616: INFO: stdout: "true" Jul 1 13:40:51.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75k5p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-409' Jul 1 13:40:51.716: INFO: stderr: "" Jul 1 13:40:51.716: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 13:40:51.716: INFO: validating pod update-demo-nautilus-75k5p Jul 1 13:40:51.721: INFO: got data: { "image": "nautilus.jpg" } Jul 1 13:40:51.721: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 13:40:51.721: INFO: update-demo-nautilus-75k5p is verified up and running STEP: using delete to clean up resources Jul 1 13:40:51.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-409' Jul 1 13:40:51.813: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 1 13:40:51.813: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 1 13:40:51.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-409' Jul 1 13:40:51.911: INFO: stderr: "No resources found in kubectl-409 namespace.\n" Jul 1 13:40:51.911: INFO: stdout: "" Jul 1 13:40:51.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-409 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 13:40:52.019: INFO: stderr: "" Jul 1 13:40:52.019: INFO: stdout: "update-demo-nautilus-2969r\nupdate-demo-nautilus-75k5p\n" Jul 1 13:40:52.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-409' Jul 1 13:40:52.617: INFO: stderr: "No resources found in kubectl-409 namespace.\n" Jul 1 13:40:52.617: INFO: stdout: "" Jul 1 13:40:52.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-409 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 13:40:52.721: INFO: stderr: "" Jul 1 13:40:52.721: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:40:52.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-409" for this suite. 
• [SLOW TEST:31.128 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":226,"skipped":3672,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:40:52.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:41:04.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1773" for this suite. 
• [SLOW TEST:11.712 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":227,"skipped":3674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:41:04.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:41:22.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1533" for this suite. 
• [SLOW TEST:18.104 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":228,"skipped":3699,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:41:22.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:41:23.222: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:41:25.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207683, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207683, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207683, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207683, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:41:28.443: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:41:30.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-602" for this suite. STEP: Destroying namespace "webhook-602-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.248 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":229,"skipped":3703,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:41:30.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:41:35.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1934" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3707,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:41:35.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0701 13:42:05.658831 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 1 13:42:05.658: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:42:05.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5109" for this suite. 
• [SLOW TEST:30.386 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":231,"skipped":3707,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:42:05.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:42:06.852: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:42:08.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729207727, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207727, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207727, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207726, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:42:11.967: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jul 1 13:42:11.987: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:42:12.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4848" for this suite. STEP: Destroying namespace "webhook-4848-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.134 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":232,"skipped":3707,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:42:12.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-751be863-a8f6-4d56-ac75-1c589912593c in namespace container-probe-8320 Jul 1 13:42:17.215: INFO: Started pod liveness-751be863-a8f6-4d56-ac75-1c589912593c in namespace container-probe-8320 STEP: checking the pod's current state and verifying that restartCount 
is present Jul 1 13:42:17.218: INFO: Initial restart count of pod liveness-751be863-a8f6-4d56-ac75-1c589912593c is 0 Jul 1 13:42:35.344: INFO: Restart count of pod container-probe-8320/liveness-751be863-a8f6-4d56-ac75-1c589912593c is now 1 (18.126509278s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:42:35.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8320" for this suite. • [SLOW TEST:22.696 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3764,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:42:35.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8496 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8496 STEP: creating replication controller externalsvc in namespace services-8496 I0701 13:42:37.162601 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8496, replica count: 2 I0701 13:42:40.212979 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 13:42:43.213441 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 13:42:46.213676 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 13:42:49.214162 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jul 1 13:42:49.309: INFO: Creating new exec pod Jul 1 13:42:53.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8496 execpodb9fpc -- /bin/sh -x -c nslookup nodeport-service' Jul 1 13:42:53.710: INFO: stderr: "I0701 13:42:53.496297 4110 log.go:172] (0xc000538d10) (0xc000a8c000) Create stream\nI0701 13:42:53.496379 4110 log.go:172] (0xc000538d10) (0xc000a8c000) Stream added, broadcasting: 1\nI0701 13:42:53.503965 4110 log.go:172] (0xc000538d10) Reply frame received for 1\nI0701 13:42:53.504025 4110 log.go:172] (0xc000538d10) (0xc0006af9a0) Create stream\nI0701 13:42:53.504040 4110 log.go:172] (0xc000538d10) (0xc0006af9a0) 
Stream added, broadcasting: 3\nI0701 13:42:53.506521 4110 log.go:172] (0xc000538d10) Reply frame received for 3\nI0701 13:42:53.506588 4110 log.go:172] (0xc000538d10) (0xc0006afb80) Create stream\nI0701 13:42:53.506601 4110 log.go:172] (0xc000538d10) (0xc0006afb80) Stream added, broadcasting: 5\nI0701 13:42:53.507786 4110 log.go:172] (0xc000538d10) Reply frame received for 5\nI0701 13:42:53.606572 4110 log.go:172] (0xc000538d10) Data frame received for 5\nI0701 13:42:53.606609 4110 log.go:172] (0xc0006afb80) (5) Data frame handling\nI0701 13:42:53.606627 4110 log.go:172] (0xc0006afb80) (5) Data frame sent\n+ nslookup nodeport-service\nI0701 13:42:53.696593 4110 log.go:172] (0xc000538d10) Data frame received for 3\nI0701 13:42:53.696616 4110 log.go:172] (0xc0006af9a0) (3) Data frame handling\nI0701 13:42:53.696627 4110 log.go:172] (0xc0006af9a0) (3) Data frame sent\nI0701 13:42:53.698843 4110 log.go:172] (0xc000538d10) Data frame received for 3\nI0701 13:42:53.698857 4110 log.go:172] (0xc0006af9a0) (3) Data frame handling\nI0701 13:42:53.698863 4110 log.go:172] (0xc0006af9a0) (3) Data frame sent\nI0701 13:42:53.699476 4110 log.go:172] (0xc000538d10) Data frame received for 5\nI0701 13:42:53.699513 4110 log.go:172] (0xc0006afb80) (5) Data frame handling\nI0701 13:42:53.700329 4110 log.go:172] (0xc000538d10) Data frame received for 3\nI0701 13:42:53.700346 4110 log.go:172] (0xc0006af9a0) (3) Data frame handling\nI0701 13:42:53.701925 4110 log.go:172] (0xc000538d10) Data frame received for 1\nI0701 13:42:53.701937 4110 log.go:172] (0xc000a8c000) (1) Data frame handling\nI0701 13:42:53.701944 4110 log.go:172] (0xc000a8c000) (1) Data frame sent\nI0701 13:42:53.701956 4110 log.go:172] (0xc000538d10) (0xc000a8c000) Stream removed, broadcasting: 1\nI0701 13:42:53.702097 4110 log.go:172] (0xc000538d10) Go away received\nI0701 13:42:53.702240 4110 log.go:172] (0xc000538d10) (0xc000a8c000) Stream removed, broadcasting: 1\nI0701 13:42:53.702253 4110 log.go:172] (0xc000538d10) 
(0xc0006af9a0) Stream removed, broadcasting: 3\nI0701 13:42:53.702258 4110 log.go:172] (0xc000538d10) (0xc0006afb80) Stream removed, broadcasting: 5\n" Jul 1 13:42:53.710: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8496.svc.cluster.local\tcanonical name = externalsvc.services-8496.svc.cluster.local.\nName:\texternalsvc.services-8496.svc.cluster.local\nAddress: 10.108.181.109\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8496, will wait for the garbage collector to delete the pods Jul 1 13:42:53.771: INFO: Deleting ReplicationController externalsvc took: 6.548038ms Jul 1 13:42:54.071: INFO: Terminating ReplicationController externalsvc pods took: 300.290933ms Jul 1 13:43:09.628: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:43:09.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8496" for this suite. 
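The nslookup output above shows nodeport-service resolving as a CNAME to externalsvc.services-8496.svc.cluster.local after the type change. A hedged sketch of what the converted Service roughly looks like (field values inferred from the logged DNS answer, not from the test source):

```yaml
# Sketch of the Service after the NodePort -> ExternalName conversion.
# An ExternalName Service has no cluster IP or ports; kube-dns/CoreDNS
# simply answers with a CNAME to spec.externalName.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-8496
spec:
  type: ExternalName
  externalName: externalsvc.services-8496.svc.cluster.local
```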
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:34.154 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":234,"skipped":3771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:43:09.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-dd11c8c4-f76d-4b3e-8351-046798330e15 STEP: Creating a pod to test consume configMaps Jul 1 13:43:09.754: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3303f61b-8006-4dd8-8140-4b19c9d911c2" in namespace "projected-6995" to be "success or failure" Jul 1 13:43:09.757: INFO: Pod "pod-projected-configmaps-3303f61b-8006-4dd8-8140-4b19c9d911c2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.145042ms Jul 1 13:43:11.828: INFO: Pod "pod-projected-configmaps-3303f61b-8006-4dd8-8140-4b19c9d911c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073848193s Jul 1 13:43:13.832: INFO: Pod "pod-projected-configmaps-3303f61b-8006-4dd8-8140-4b19c9d911c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078777417s STEP: Saw pod success Jul 1 13:43:13.833: INFO: Pod "pod-projected-configmaps-3303f61b-8006-4dd8-8140-4b19c9d911c2" satisfied condition "success or failure" Jul 1 13:43:13.836: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-3303f61b-8006-4dd8-8140-4b19c9d911c2 container projected-configmap-volume-test: STEP: delete the pod Jul 1 13:43:13.942: INFO: Waiting for pod pod-projected-configmaps-3303f61b-8006-4dd8-8140-4b19c9d911c2 to disappear Jul 1 13:43:13.950: INFO: Pod pod-projected-configmaps-3303f61b-8006-4dd8-8140-4b19c9d911c2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:43:13.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6995" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3816,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:43:13.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:43:18.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7905" for this suite. 
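The Kubelet test above schedules a busybox command that always fails and checks that the container status reports a terminated reason. A minimal illustrative pod of that shape (the name, image tag, and command are assumptions for illustration, not taken from the log):

```yaml
# Illustrative pod whose container always exits non-zero (assumed shape).
# The kubelet records a terminated state with a reason (e.g. "Error")
# in status.containerStatuses[].state.terminated, which is what the
# conformance test asserts on.
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
```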
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3823,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:43:18.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:43:18.175: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jul 1 13:43:18.203: INFO: Number of nodes with available pods: 0 Jul 1 13:43:18.203: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jul 1 13:43:18.501: INFO: Number of nodes with available pods: 0 Jul 1 13:43:18.501: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:19.506: INFO: Number of nodes with available pods: 0 Jul 1 13:43:19.506: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:20.505: INFO: Number of nodes with available pods: 0 Jul 1 13:43:20.505: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:21.506: INFO: Number of nodes with available pods: 0 Jul 1 13:43:21.506: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:22.506: INFO: Number of nodes with available pods: 1 Jul 1 13:43:22.506: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jul 1 13:43:22.630: INFO: Number of nodes with available pods: 1 Jul 1 13:43:22.630: INFO: Number of running nodes: 0, number of available pods: 1 Jul 1 13:43:23.846: INFO: Number of nodes with available pods: 0 Jul 1 13:43:23.846: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jul 1 13:43:24.142: INFO: Number of nodes with available pods: 0 Jul 1 13:43:24.142: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:25.367: INFO: Number of nodes with available pods: 0 Jul 1 13:43:25.368: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:26.214: INFO: Number of nodes with available pods: 0 Jul 1 13:43:26.214: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:27.146: INFO: Number of nodes with available pods: 0 Jul 1 13:43:27.146: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:28.146: INFO: Number of nodes with available pods: 0 Jul 1 13:43:28.146: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:29.146: INFO: Number of nodes with available pods: 0 Jul 1 
13:43:29.146: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:30.146: INFO: Number of nodes with available pods: 0 Jul 1 13:43:30.146: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:31.146: INFO: Number of nodes with available pods: 0 Jul 1 13:43:31.146: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:32.146: INFO: Number of nodes with available pods: 0 Jul 1 13:43:32.146: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:33.146: INFO: Number of nodes with available pods: 0 Jul 1 13:43:33.146: INFO: Node jerma-worker is running more than one daemon pod Jul 1 13:43:34.146: INFO: Number of nodes with available pods: 1 Jul 1 13:43:34.146: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2938, will wait for the garbage collector to delete the pods Jul 1 13:43:34.216: INFO: Deleting DaemonSet.extensions daemon-set took: 10.997179ms Jul 1 13:43:34.516: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.345854ms Jul 1 13:43:49.320: INFO: Number of nodes with available pods: 0 Jul 1 13:43:49.320: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 13:43:49.323: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2938/daemonsets","resourceVersion":"28795612"},"items":null} Jul 1 13:43:49.326: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2938/pods","resourceVersion":"28795612"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:43:49.353: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2938" for this suite. • [SLOW TEST:31.288 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":237,"skipped":3844,"failed":0} [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:43:49.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Jul 1 13:43:49.445: INFO: Created pod &Pod{ObjectMeta:{dns-295 dns-295 /api/v1/namespaces/dns-295/pods/dns-295 bbb6e01f-6922-427d-9868-c60931efdfc8 28795619 0 2020-07-01 13:43:49 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-258gf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-258gf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-258gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
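The pod object dumped above sets DNSPolicy:None with Nameservers:[1.1.1.1] and Searches:[resolv.conf.local]. The same spec written as a manifest (a sketch reconstructed from the logged object; fields not shown in the log are omitted):

```yaml
# Manifest form of the logged dns-295 pod (reconstructed sketch).
# dnsPolicy: "None" tells the kubelet to ignore the cluster DNS settings
# and generate the container's /etc/resolv.conf purely from dnsConfig.
apiVersion: v1
kind: Pod
metadata:
  name: dns-295
  namespace: dns-295
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
```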
Jul 1 13:43:53.451: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-295 PodName:dns-295 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 13:43:53.451: INFO: >>> kubeConfig: /root/.kube/config I0701 13:43:53.490897 6 log.go:172] (0xc0024ae8f0) (0xc000d17360) Create stream I0701 13:43:53.490928 6 log.go:172] (0xc0024ae8f0) (0xc000d17360) Stream added, broadcasting: 1 I0701 13:43:53.492952 6 log.go:172] (0xc0024ae8f0) Reply frame received for 1 I0701 13:43:53.493058 6 log.go:172] (0xc0024ae8f0) (0xc0026d2000) Create stream I0701 13:43:53.493085 6 log.go:172] (0xc0024ae8f0) (0xc0026d2000) Stream added, broadcasting: 3 I0701 13:43:53.494162 6 log.go:172] (0xc0024ae8f0) Reply frame received for 3 I0701 13:43:53.494192 6 log.go:172] (0xc0024ae8f0) (0xc001110140) Create stream I0701 13:43:53.494212 6 log.go:172] (0xc0024ae8f0) (0xc001110140) Stream added, broadcasting: 5 I0701 13:43:53.494967 6 log.go:172] (0xc0024ae8f0) Reply frame received for 5 I0701 13:43:53.580665 6 log.go:172] (0xc0024ae8f0) Data frame received for 3 I0701 13:43:53.580700 6 log.go:172] (0xc0026d2000) (3) Data frame handling I0701 13:43:53.580729 6 log.go:172] (0xc0026d2000) (3) Data frame sent I0701 13:43:53.581875 6 log.go:172] (0xc0024ae8f0) Data frame received for 3 I0701 13:43:53.581897 6 log.go:172] (0xc0026d2000) (3) Data frame handling I0701 13:43:53.582047 6 log.go:172] (0xc0024ae8f0) Data frame received for 5 I0701 13:43:53.582070 6 log.go:172] (0xc001110140) (5) Data frame handling I0701 13:43:53.583677 6 log.go:172] (0xc0024ae8f0) Data frame received for 1 I0701 13:43:53.583707 6 log.go:172] (0xc000d17360) (1) Data frame handling I0701 13:43:53.583732 6 log.go:172] (0xc000d17360) (1) Data frame sent I0701 13:43:53.583748 6 log.go:172] (0xc0024ae8f0) (0xc000d17360) Stream removed, broadcasting: 1 I0701 13:43:53.583766 6 log.go:172] (0xc0024ae8f0) Go away received I0701 13:43:53.583893 6 log.go:172] (0xc0024ae8f0) 
(0xc000d17360) Stream removed, broadcasting: 1 I0701 13:43:53.583913 6 log.go:172] (0xc0024ae8f0) (0xc0026d2000) Stream removed, broadcasting: 3 I0701 13:43:53.583925 6 log.go:172] (0xc0024ae8f0) (0xc001110140) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Jul 1 13:43:53.583: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-295 PodName:dns-295 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 13:43:53.583: INFO: >>> kubeConfig: /root/.kube/config I0701 13:43:53.764462 6 log.go:172] (0xc002a7e420) (0xc000e3f2c0) Create stream I0701 13:43:53.764491 6 log.go:172] (0xc002a7e420) (0xc000e3f2c0) Stream added, broadcasting: 1 I0701 13:43:53.766890 6 log.go:172] (0xc002a7e420) Reply frame received for 1 I0701 13:43:53.766934 6 log.go:172] (0xc002a7e420) (0xc000d17720) Create stream I0701 13:43:53.766950 6 log.go:172] (0xc002a7e420) (0xc000d17720) Stream added, broadcasting: 3 I0701 13:43:53.768071 6 log.go:172] (0xc002a7e420) Reply frame received for 3 I0701 13:43:53.768129 6 log.go:172] (0xc002a7e420) (0xc000d177c0) Create stream I0701 13:43:53.768149 6 log.go:172] (0xc002a7e420) (0xc000d177c0) Stream added, broadcasting: 5 I0701 13:43:53.769094 6 log.go:172] (0xc002a7e420) Reply frame received for 5 I0701 13:43:53.850680 6 log.go:172] (0xc002a7e420) Data frame received for 3 I0701 13:43:53.850713 6 log.go:172] (0xc000d17720) (3) Data frame handling I0701 13:43:53.850733 6 log.go:172] (0xc000d17720) (3) Data frame sent I0701 13:43:53.852711 6 log.go:172] (0xc002a7e420) Data frame received for 3 I0701 13:43:53.852771 6 log.go:172] (0xc000d17720) (3) Data frame handling I0701 13:43:53.852801 6 log.go:172] (0xc002a7e420) Data frame received for 5 I0701 13:43:53.852816 6 log.go:172] (0xc000d177c0) (5) Data frame handling I0701 13:43:53.854551 6 log.go:172] (0xc002a7e420) Data frame received for 1 I0701 13:43:53.854608 6 log.go:172] (0xc000e3f2c0) (1) Data 
frame handling I0701 13:43:53.854651 6 log.go:172] (0xc000e3f2c0) (1) Data frame sent I0701 13:43:53.854675 6 log.go:172] (0xc002a7e420) (0xc000e3f2c0) Stream removed, broadcasting: 1 I0701 13:43:53.854715 6 log.go:172] (0xc002a7e420) Go away received I0701 13:43:53.854985 6 log.go:172] (0xc002a7e420) (0xc000e3f2c0) Stream removed, broadcasting: 1 I0701 13:43:53.855013 6 log.go:172] (0xc002a7e420) (0xc000d17720) Stream removed, broadcasting: 3 I0701 13:43:53.855025 6 log.go:172] (0xc002a7e420) (0xc000d177c0) Stream removed, broadcasting: 5 Jul 1 13:43:53.855: INFO: Deleting pod dns-295... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:43:53.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-295" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":238,"skipped":3844,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:43:53.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod 
STEP: Wait for the deployment to be ready Jul 1 13:43:54.994: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:43:57.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207834, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207834, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207835, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207834, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:44:00.357: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:44:00.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7752" for this suite. STEP: Destroying namespace "webhook-7752-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.788 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":239,"skipped":3859,"failed":0} S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:44:00.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-9f13eeb1-b7c9-4df9-9b30-3b0430263bc8 STEP: 
Creating the pod STEP: Updating configmap projected-configmap-test-upd-9f13eeb1-b7c9-4df9-9b30-3b0430263bc8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:44:06.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9499" for this suite. • [SLOW TEST:6.193 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3860,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:44:06.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 1 13:44:11.519: INFO: Successfully updated pod 
"labelsupdate7d90d79d-4c60-46ae-a06e-75e0e429c774" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:44:13.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9111" for this suite. • [SLOW TEST:6.716 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:44:13.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod 
[AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:44:13.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6431" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":242,"skipped":3905,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:44:13.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 1 13:44:13.861: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:44:22.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9913" for this suite. 
• [SLOW TEST:8.441 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":243,"skipped":3907,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:44:22.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jul 1 13:44:22.931: INFO: created pod pod-service-account-defaultsa Jul 1 13:44:22.931: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 1 13:44:23.014: INFO: created pod pod-service-account-mountsa Jul 1 13:44:23.014: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 1 13:44:23.037: INFO: created pod pod-service-account-nomountsa Jul 1 13:44:23.037: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 1 13:44:23.114: INFO: created pod pod-service-account-defaultsa-mountspec Jul 1 13:44:23.114: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 1 13:44:23.602: INFO: created pod pod-service-account-mountsa-mountspec Jul 1 13:44:23.602: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 1 13:44:23.647: INFO: created pod pod-service-account-nomountsa-mountspec Jul 1 13:44:23.647: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 1 13:44:23.920: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 1 13:44:23.920: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 1 13:44:23.990: INFO: created pod pod-service-account-mountsa-nomountspec Jul 1 13:44:23.990: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 1 13:44:24.228: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 1 13:44:24.229: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:44:24.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1082" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":244,"skipped":3933,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:44:25.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jul 1 13:44:26.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9396' Jul 1 13:44:28.189: INFO: stderr: "" Jul 1 13:44:28.189: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 1 13:44:28.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9396' Jul 1 13:44:28.404: INFO: stderr: "" Jul 1 13:44:28.404: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jul 1 13:44:33.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9396' Jul 1 13:44:33.524: INFO: stderr: "" Jul 1 13:44:33.524: INFO: stdout: "update-demo-nautilus-625m7 update-demo-nautilus-zscqw " Jul 1 13:44:33.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-625m7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9396' Jul 1 13:44:33.892: INFO: stderr: "" Jul 1 13:44:33.892: INFO: stdout: "" Jul 1 13:44:33.892: INFO: update-demo-nautilus-625m7 is created but not running Jul 1 13:44:38.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9396' Jul 1 13:44:39.193: INFO: stderr: "" Jul 1 13:44:39.193: INFO: stdout: "update-demo-nautilus-625m7 update-demo-nautilus-zscqw " Jul 1 13:44:39.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-625m7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9396' Jul 1 13:44:39.358: INFO: stderr: "" Jul 1 13:44:39.358: INFO: stdout: "true" Jul 1 13:44:39.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-625m7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9396' Jul 1 13:44:39.588: INFO: stderr: "" Jul 1 13:44:39.588: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 13:44:39.588: INFO: validating pod update-demo-nautilus-625m7 Jul 1 13:44:39.617: INFO: got data: { "image": "nautilus.jpg" } Jul 1 13:44:39.617: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 13:44:39.617: INFO: update-demo-nautilus-625m7 is verified up and running Jul 1 13:44:39.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zscqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9396' Jul 1 13:44:39.726: INFO: stderr: "" Jul 1 13:44:39.726: INFO: stdout: "true" Jul 1 13:44:39.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zscqw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9396' Jul 1 13:44:39.942: INFO: stderr: "" Jul 1 13:44:39.942: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 13:44:39.942: INFO: validating pod update-demo-nautilus-zscqw Jul 1 13:44:39.947: INFO: got data: { "image": "nautilus.jpg" } Jul 1 13:44:39.947: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 1 13:44:39.947: INFO: update-demo-nautilus-zscqw is verified up and running STEP: using delete to clean up resources Jul 1 13:44:39.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9396' Jul 1 13:44:40.174: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 13:44:40.174: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 1 13:44:40.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9396' Jul 1 13:44:40.473: INFO: stderr: "No resources found in kubectl-9396 namespace.\n" Jul 1 13:44:40.473: INFO: stdout: "" Jul 1 13:44:40.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9396 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 13:44:40.829: INFO: stderr: "" Jul 1 13:44:40.829: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:44:40.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9396" for this suite. 
• [SLOW TEST:15.779 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":245,"skipped":3942,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:44:41.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6445.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6445.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6445.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6445.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6445.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6445.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 13:44:50.585: INFO: DNS probes using dns-6445/dns-test-07824613-eca1-4e21-8ca3-8d8b29169db6 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:44:50.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6445" for this suite. 
• [SLOW TEST:9.491 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":246,"skipped":3978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:44:50.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:44:50.760: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 1 13:44:54.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4828 create -f -' Jul 1 13:45:02.229: INFO: stderr: "" Jul 1 13:45:02.229: INFO: stdout: "e2e-test-crd-publish-openapi-9865-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jul 1 13:45:02.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-4828 delete e2e-test-crd-publish-openapi-9865-crds test-cr' Jul 1 13:45:02.373: INFO: stderr: "" Jul 1 13:45:02.373: INFO: stdout: "e2e-test-crd-publish-openapi-9865-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jul 1 13:45:02.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4828 apply -f -' Jul 1 13:45:03.807: INFO: stderr: "" Jul 1 13:45:03.807: INFO: stdout: "e2e-test-crd-publish-openapi-9865-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jul 1 13:45:03.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4828 delete e2e-test-crd-publish-openapi-9865-crds test-cr' Jul 1 13:45:03.954: INFO: stderr: "" Jul 1 13:45:03.954: INFO: stdout: "e2e-test-crd-publish-openapi-9865-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jul 1 13:45:03.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9865-crds' Jul 1 13:45:04.270: INFO: stderr: "" Jul 1 13:45:04.271: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9865-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:45:07.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4828" for this suite. • [SLOW TEST:16.453 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":247,"skipped":4023,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:45:07.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:45:07.828: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:45:10.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207907, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207907, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207907, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207907, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:45:12.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207907, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207907, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207907, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729207907, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:45:15.497: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:45:15.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2882-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:45:16.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-358" for this suite. STEP: Destroying namespace "webhook-358-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.668 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":248,"skipped":4031,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:45:16.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jul 1 13:45:23.464: INFO: Successfully updated pod "adopt-release-8j476"
STEP: Checking that the Job readopts the Pod
Jul 1 13:45:23.464: INFO: Waiting up to 15m0s for pod "adopt-release-8j476" in namespace "job-674" to be "adopted"
Jul 1 13:45:23.469: INFO: Pod "adopt-release-8j476": Phase="Running", Reason="", readiness=true. Elapsed: 5.379797ms
Jul 1 13:45:25.485: INFO: Pod "adopt-release-8j476": Phase="Running", Reason="", readiness=true. Elapsed: 2.020554945s
Jul 1 13:45:25.485: INFO: Pod "adopt-release-8j476" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jul 1 13:45:25.994: INFO: Successfully updated pod "adopt-release-8j476"
STEP: Checking that the Job releases the Pod
Jul 1 13:45:25.994: INFO: Waiting up to 15m0s for pod "adopt-release-8j476" in namespace "job-674" to be "released"
Jul 1 13:45:25.997: INFO: Pod "adopt-release-8j476": Phase="Running", Reason="", readiness=true. Elapsed: 3.133502ms
Jul 1 13:45:28.002: INFO: Pod "adopt-release-8j476": Phase="Running", Reason="", readiness=true. Elapsed: 2.008416988s
Jul 1 13:45:28.003: INFO: Pod "adopt-release-8j476" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 13:45:28.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-674" for this suite.
• [SLOW TEST:11.167 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":249,"skipped":4043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:45:28.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-18a77cd7-d6e0-42b7-8907-7036b641c30d
STEP: Creating a pod to test consume configMaps
Jul 1 13:45:28.568: INFO: Waiting up to 5m0s for pod "pod-configmaps-fbfb0e78-252d-4356-9de5-3543cee100fa" in namespace "configmap-3340" to be "success or failure"
Jul 1 13:45:28.716: INFO: Pod "pod-configmaps-fbfb0e78-252d-4356-9de5-3543cee100fa": Phase="Pending", Reason="", readiness=false. Elapsed: 148.390753ms
Jul 1 13:45:30.721: INFO: Pod "pod-configmaps-fbfb0e78-252d-4356-9de5-3543cee100fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152584664s
Jul 1 13:45:32.725: INFO: Pod "pod-configmaps-fbfb0e78-252d-4356-9de5-3543cee100fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156834039s
STEP: Saw pod success
Jul 1 13:45:32.725: INFO: Pod "pod-configmaps-fbfb0e78-252d-4356-9de5-3543cee100fa" satisfied condition "success or failure"
Jul 1 13:45:32.728: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fbfb0e78-252d-4356-9de5-3543cee100fa container configmap-volume-test:
STEP: delete the pod
Jul 1 13:45:32.841: INFO: Waiting for pod pod-configmaps-fbfb0e78-252d-4356-9de5-3543cee100fa to disappear
Jul 1 13:45:32.847: INFO: Pod pod-configmaps-fbfb0e78-252d-4356-9de5-3543cee100fa no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 13:45:32.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3340" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4072,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:45:32.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 1 13:45:33.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9" in namespace "projected-1488" to be "success or failure"
Jul 1 13:45:33.303: INFO: Pod "downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 149.840599ms
Jul 1 13:45:35.472: INFO: Pod "downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318285725s
Jul 1 13:45:37.474: INFO: Pod "downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321123576s
Jul 1 13:45:39.478: INFO: Pod "downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.325006795s
STEP: Saw pod success
Jul 1 13:45:39.478: INFO: Pod "downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9" satisfied condition "success or failure"
Jul 1 13:45:39.481: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9 container client-container:
STEP: delete the pod
Jul 1 13:45:39.502: INFO: Waiting for pod downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9 to disappear
Jul 1 13:45:39.525: INFO: Pod downwardapi-volume-ff9ec2d1-4865-4061-9d1d-110e659e2dc9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 13:45:39.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1488" for this suite.
• [SLOW TEST:6.672 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4080,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:45:39.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 1 13:45:47.644: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 13:45:47.656: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 13:45:49.656: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 13:45:49.661: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 13:45:51.656: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 13:45:51.661: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 13:45:53.656: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 13:45:53.661: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 13:45:55.656: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 13:45:55.661: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 13:45:57.656: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 13:45:57.660: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 1 13:45:59.656: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 1 13:45:59.661: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 13:45:59.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-859" for this suite.
• [SLOW TEST:20.144 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4102,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:45:59.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-8472f2fb-46b1-49ec-9905-37a4753a8caa
STEP: Creating configMap with name cm-test-opt-upd-afb9fd17-809f-4f28-89ce-680eefd57700
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8472f2fb-46b1-49ec-9905-37a4753a8caa
STEP: Updating configmap cm-test-opt-upd-afb9fd17-809f-4f28-89ce-680eefd57700
STEP: Creating configMap with name cm-test-opt-create-ce9c2ad1-dd75-4a4e-ac1d-5d58e1dfe6a3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 13:47:16.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6561" for this suite.
• [SLOW TEST:77.078 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:47:16.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-c6b5e3b6-4217-4919-b6c5-40c7ad4b94e7
STEP: Creating a pod to test consume secrets
Jul 1 13:47:17.012: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca" in namespace "projected-8287" to be "success or failure"
Jul 1 13:47:17.039: INFO: Pod "pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca": Phase="Pending", Reason="", readiness=false. Elapsed: 27.291533ms
Jul 1 13:47:19.043: INFO: Pod "pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031446816s
Jul 1 13:47:21.048: INFO: Pod "pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca": Phase="Running", Reason="", readiness=true. Elapsed: 4.035968127s
Jul 1 13:47:23.052: INFO: Pod "pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040387418s
STEP: Saw pod success
Jul 1 13:47:23.052: INFO: Pod "pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca" satisfied condition "success or failure"
Jul 1 13:47:23.055: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca container projected-secret-volume-test:
STEP: delete the pod
Jul 1 13:47:23.222: INFO: Waiting for pod pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca to disappear
Jul 1 13:47:23.263: INFO: Pod pod-projected-secrets-b5d8fbb5-6b13-4f66-a5aa-aaa9a8bb1cca no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 13:47:23.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8287" for this suite.
• [SLOW TEST:6.515 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4176,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:47:23.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 13:47:23.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3139" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4252,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:47:23.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 1 13:47:23.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Jul 1 13:47:24.493: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T13:47:24Z generation:1 name:name1 resourceVersion:28796970 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f0d80f71-b235-4a93-9dc4-83e3d3c80c29] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jul 1 13:47:34.499: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T13:47:34Z generation:1 name:name2 resourceVersion:28797019 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d2b3e3ba-cc15-42dd-bfa6-7af674231358] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jul 1 13:47:44.505: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T13:47:24Z generation:2 name:name1 resourceVersion:28797049 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f0d80f71-b235-4a93-9dc4-83e3d3c80c29] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jul 1 13:47:54.512: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T13:47:34Z generation:2 name:name2 resourceVersion:28797080 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d2b3e3ba-cc15-42dd-bfa6-7af674231358] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jul 1 13:48:04.521: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T13:47:24Z generation:2 name:name1 resourceVersion:28797110 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f0d80f71-b235-4a93-9dc4-83e3d3c80c29] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jul 1 13:48:14.531: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T13:47:34Z generation:2 name:name2 resourceVersion:28797140 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d2b3e3ba-cc15-42dd-bfa6-7af674231358] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 1 13:48:25.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-7346" for this suite.
• [SLOW TEST:61.269 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":256,"skipped":4262,"failed":0}
SSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 1 13:48:25.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 1 13:48:25.122: INFO: Creating deployment "webserver-deployment"
Jul 1 13:48:25.139: INFO: Waiting for observed generation 1
Jul 1 13:48:27.162: INFO: Waiting for all required pods to come up
Jul 1 13:48:27.167: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 1 13:48:39.175: INFO: Waiting for deployment "webserver-deployment" to complete
Jul 1 13:48:39.184: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jul 1 13:48:39.190: INFO: Updating deployment webserver-deployment
Jul 1 13:48:39.190: INFO: Waiting for observed generation 2
Jul 1 13:48:41.467: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 1 13:48:41.471: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 1 13:48:41.474: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 1 13:48:41.481: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 1 13:48:41.481: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 1 13:48:41.483: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 1 13:48:41.488: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jul 1 13:48:41.488: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jul 1 13:48:41.493: INFO: Updating deployment webserver-deployment
Jul 1 13:48:41.494: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jul 1 13:48:41.684: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 1 13:48:41.828: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul 1 13:48:45.308: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4719 /apis/apps/v1/namespaces/deployment-4719/deployments/webserver-deployment 29a83997-2886-4388-9d00-48aa9842b8bd 28797485 3 2020-07-01 13:48:25 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004027d48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-01 13:48:41 +0000 UTC,LastTransitionTime:2020-07-01 13:48:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-07-01 13:48:43 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Jul 1 13:48:45.474: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-4719 /apis/apps/v1/namespaces/deployment-4719/replicasets/webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 28797471 3 2020-07-01 13:48:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 29a83997-2886-4388-9d00-48aa9842b8bd 0xc00429fb37 0xc00429fb38}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00429fba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 1 13:48:45.474: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jul 1 13:48:45.475: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-4719 /apis/apps/v1/namespaces/deployment-4719/replicasets/webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 28797483 3 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 29a83997-2886-4388-9d00-48aa9842b8bd 0xc00429fa77 0xc00429fa78}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00429fad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jul 1 13:48:45.534: INFO: Pod "webserver-deployment-595b5b9587-42wlp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-42wlp webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-42wlp 4ee747d3-2d6b-4065-aa95-d54bd3560f0b 28797293 0 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f40cd7 0xc003f40cd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.92,StartTime:2020-07-01 13:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:48:34 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dc01a8d7c428b9d0fef0f0bc40bd311010a0ee6f9c69245ae0150869b296f7f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.534: INFO: Pod "webserver-deployment-595b5b9587-67ckz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-67ckz webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-67ckz 7a66c0d7-338e-438b-a9ac-c89cb3311174 28797496 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f40e57 0xc003f40e58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-01 13:48:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.534: INFO: Pod "webserver-deployment-595b5b9587-8m8l4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8m8l4 webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-8m8l4 c64716a4-7e11-4887-ba40-f56f27aacfbd 28797492 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f40fb7 0xc003f40fb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-01 13:48:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.535: INFO: Pod "webserver-deployment-595b5b9587-8ppxd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8ppxd webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-8ppxd ac8d3bcc-196d-4d1e-830d-ff782ddf3d28 28797464 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41117 0xc003f41118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-01 13:48:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.535: INFO: Pod "webserver-deployment-595b5b9587-8w86v" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8w86v webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-8w86v 405ad9d1-0f95-4897-9844-6a65edb2f377 28797451 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41277 0xc003f41278}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.535: INFO: Pod "webserver-deployment-595b5b9587-9kc8v" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9kc8v webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-9kc8v e2ee08c9-3c27-4531-943f-77e2577b3399 28797487 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41397 0xc003f41398}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-01 13:48:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.535: INFO: Pod "webserver-deployment-595b5b9587-dmkqd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dmkqd webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-dmkqd f8c0f435-17e5-409b-9913-98a5bf8e2976 28797308 0 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f414f7 0xc003f414f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.125,StartTime:2020-07-01 13:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:48:34 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://db0eb97e8dc9ebb52cd3cf9f5b147112b2774021e2e98539d98c668e0642bec1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.535: INFO: Pod "webserver-deployment-595b5b9587-dqvbt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dqvbt webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-dqvbt cc585898-0f1a-471d-ae6f-6018c49fc48c 28797480 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41677 0xc003f41678}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-01 13:48:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.536: INFO: Pod "webserver-deployment-595b5b9587-gv498" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gv498 webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-gv498 3afee7cc-2811-483a-b8e8-06d3d2255771 28797325 0 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f417d7 0xc003f417d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.95,StartTime:2020-07-01 13:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:48:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://67ebd596e2681cf231ebaa471c39c1a5584696d40ef9b9d8197ff1dd6e70b018,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.536: INFO: Pod "webserver-deployment-595b5b9587-kw7jt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kw7jt webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-kw7jt 1f552146-5385-4de3-845e-ad9d81b5ce62 28797276 0 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41957 0xc003f41958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.124,StartTime:2020-07-01 13:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:48:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d10d5313cd1979bd03ae4c8e14fc2d3c59f2ae00aae20c80394c58a60fe3e1ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.536: INFO: Pod "webserver-deployment-595b5b9587-lj5nb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lj5nb webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-lj5nb 56a38cda-dd9f-44d7-8d64-0cb127d80a15 28797301 0 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41ad7 0xc003f41ad8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.126,StartTime:2020-07-01 13:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:48:35 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://70e09599babeb57d47b752e16cf730276d471988a6187c8d70a076509210d041,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.536: INFO: Pod "webserver-deployment-595b5b9587-mphxk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mphxk webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-mphxk dd9d51bf-13d1-49ff-b642-ce17ec58f829 28797452 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41c57 0xc003f41c58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.536: INFO: Pod "webserver-deployment-595b5b9587-nsc2k" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nsc2k webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-nsc2k b5b189da-132d-4bda-9fec-14e742a9f73d 28797449 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41d77 0xc003f41d78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.536: INFO: Pod "webserver-deployment-595b5b9587-pj8x9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pj8x9 webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-pj8x9 5da910d6-686c-4d9d-9674-4478eae349dc 28797309 0 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc003f41e97 0xc003f41e98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.93,StartTime:2020-07-01 13:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:48:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a5ceb523d292868e263c3178449e04a5c855d890a55135f8075ec7bd05358f60,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.537: INFO: Pod "webserver-deployment-595b5b9587-rw7zj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rw7zj webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-rw7zj 4371a019-067b-4be7-9a92-a82d3d224959 28797328 0 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc0024aa017 0xc0024aa018}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.94,StartTime:2020-07-01 13:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:48:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8c36cf5219ecbb794104d22bb8893a98df410e6b8643e930e19bdea602302baa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.537: INFO: Pod "webserver-deployment-595b5b9587-s9cw4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s9cw4 webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-s9cw4 0e4c6d4f-f450-4a81-828e-a5d76d0eff6f 28797264 0 2020-07-01 13:48:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc0024aa197 0xc0024aa198}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.91,StartTime:2020-07-01 13:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:48:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f9690c93c29919fccc80a8820ca6573cf0a7c486179f3965c8e03412283fc075,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.537: INFO: Pod "webserver-deployment-595b5b9587-tx8jx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tx8jx webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-tx8jx 8f917c08-365d-448c-840b-c394e99df287 28797440 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc0024aa317 0xc0024aa318}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.537: INFO: Pod "webserver-deployment-595b5b9587-w4cxk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w4cxk webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-w4cxk a2f79a5a-4139-47bd-9936-b62f0741ccf0 28797470 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc0024aa447 0xc0024aa448}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-01 13:48:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.537: INFO: Pod "webserver-deployment-595b5b9587-wm45f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wm45f webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-wm45f 5d870a14-2262-4536-8457-067e13bd5d00 28797455 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc0024aa5a7 0xc0024aa5a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.538: INFO: Pod "webserver-deployment-595b5b9587-wt7jk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wt7jk webserver-deployment-595b5b9587- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-595b5b9587-wt7jk 00f88b06-0193-40cf-8c52-12ccca6832d3 28797504 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e22c1f21-6d16-466a-b66f-4debd66ca1d4 0xc0024aa6c7 0xc0024aa6c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-01 13:48:44 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.538: INFO: Pod "webserver-deployment-c7997dcc8-7gh72" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7gh72 webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-7gh72 f3fe9138-2780-4f49-9ea2-9bc3d244f82a 28797454 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024aa827 0xc0024aa828}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.538: INFO: Pod "webserver-deployment-c7997dcc8-7mgdt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7mgdt webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-7mgdt dbd568b7-125c-4ddf-9043-8626354e3893 28797381 0 2020-07-01 13:48:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024aa957 0xc0024aa958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-01 13:48:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.538: INFO: Pod "webserver-deployment-c7997dcc8-8rg9q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8rg9q webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-8rg9q f926f9d5-49b9-498b-a6b9-53dbcb9ce8d6 28797389 0 2020-07-01 13:48:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024aaad7 0xc0024aaad8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-01 13:48:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.538: INFO: Pod "webserver-deployment-c7997dcc8-8xxbb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8xxbb webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-8xxbb 2dc0a865-fc84-4569-8fa0-97ee10ad82bc 28797463 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024aac57 0xc0024aac58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.538: INFO: Pod "webserver-deployment-c7997dcc8-9fmsn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9fmsn webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-9fmsn 0f4afff2-810f-447d-9d07-0bccdd9cb06c 28797456 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024aad87 0xc0024aad88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.538: INFO: Pod "webserver-deployment-c7997dcc8-fc6t4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fc6t4 webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-fc6t4 44792727-5dfd-4d71-b93a-904932e05557 28797450 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024aaeb7 0xc0024aaeb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.538: INFO: Pod "webserver-deployment-c7997dcc8-fsq7t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fsq7t webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-fsq7t 0a1de679-988a-4277-abd4-508232c5a703 28797499 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024aaff7 0xc0024aaff8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-01 13:48:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.539: INFO: Pod "webserver-deployment-c7997dcc8-l4rd2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l4rd2 webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-l4rd2 7b7059df-c802-4f85-b7bc-b006d418eeaf 28797350 0 2020-07-01 13:48:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024ab177 0xc0024ab178}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-07-01 13:48:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.539: INFO: Pod "webserver-deployment-c7997dcc8-l6n8l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l6n8l webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-l6n8l b6ffeec6-6049-49ea-b8dd-8a9754ed96b3 28797503 0 2020-07-01 13:48:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024ab2f7 0xc0024ab2f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.129,StartTime:2020-07-01 13:48:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization 
failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.539: INFO: Pod "webserver-deployment-c7997dcc8-mmsnx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mmsnx webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-mmsnx 055c8585-8080-4a56-9639-f50bb61b9fc1 28797437 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024ab4a7 0xc0024ab4a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPrese
nt,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 
1 13:48:45.539: INFO: Pod "webserver-deployment-c7997dcc8-v6h5q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v6h5q webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-v6h5q 5823cae4-652d-4ad3-a32a-99788d241cc9 28797453 0 2020-07-01 13:48:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024ab5d7 0xc0024ab5d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPol
icy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.539: INFO: Pod "webserver-deployment-c7997dcc8-vf2cg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vf2cg webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-vf2cg a62a6b79-1696-41fb-81a4-c6a08b8ccbb9 28797366 0 2020-07-01 13:48:39 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024ab707 0xc0024ab708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGro
ups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-01 13:48:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 1 13:48:45.539: INFO: Pod "webserver-deployment-c7997dcc8-xslks" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xslks webserver-deployment-c7997dcc8- deployment-4719 /api/v1/namespaces/deployment-4719/pods/webserver-deployment-c7997dcc8-xslks b8c8e1e7-29c5-4257-ba48-dface09dc47b 28797484 0 2020-07-01 13:48:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 092b81d6-4ffb-4486-8317-462005251f65 0xc0024ab887 0xc0024ab888}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r8cd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:48:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-07-01 13:48:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:48:45.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4719" for this suite. • [SLOW TEST:21.424 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":257,"skipped":4265,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:48:46.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-mvgl STEP: Creating a pod to test atomic-volume-subpath Jul 1 13:48:49.141: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mvgl" in namespace "subpath-9257" to be "success or failure" Jul 1 13:48:49.434: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Pending", Reason="", readiness=false. Elapsed: 292.804335ms Jul 1 13:48:51.941: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.799638063s Jul 1 13:48:54.360: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Pending", Reason="", readiness=false. Elapsed: 5.218559652s Jul 1 13:48:56.882: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Pending", Reason="", readiness=false. Elapsed: 7.741133377s Jul 1 13:48:59.174: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.033134583s Jul 1 13:49:01.407: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.265620999s Jul 1 13:49:03.597: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Pending", Reason="", readiness=false. Elapsed: 14.455332387s Jul 1 13:49:05.990: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Pending", Reason="", readiness=false. Elapsed: 16.848368344s Jul 1 13:49:08.038: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. Elapsed: 18.896723502s Jul 1 13:49:10.275: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. Elapsed: 21.133884131s Jul 1 13:49:12.460: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. 
Elapsed: 23.318749712s Jul 1 13:49:14.503: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. Elapsed: 25.361962734s Jul 1 13:49:16.858: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. Elapsed: 27.716615629s Jul 1 13:49:18.953: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. Elapsed: 29.811726456s Jul 1 13:49:20.957: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. Elapsed: 31.815570837s Jul 1 13:49:22.961: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. Elapsed: 33.819900882s Jul 1 13:49:24.965: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Running", Reason="", readiness=true. Elapsed: 35.82419711s Jul 1 13:49:26.970: INFO: Pod "pod-subpath-test-configmap-mvgl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.828786786s STEP: Saw pod success Jul 1 13:49:26.970: INFO: Pod "pod-subpath-test-configmap-mvgl" satisfied condition "success or failure" Jul 1 13:49:26.973: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-mvgl container test-container-subpath-configmap-mvgl: STEP: delete the pod Jul 1 13:49:27.045: INFO: Waiting for pod pod-subpath-test-configmap-mvgl to disappear Jul 1 13:49:27.064: INFO: Pod pod-subpath-test-configmap-mvgl no longer exists STEP: Deleting pod pod-subpath-test-configmap-mvgl Jul 1 13:49:27.064: INFO: Deleting pod "pod-subpath-test-configmap-mvgl" in namespace "subpath-9257" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:49:27.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9257" for this suite. 
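The long runs of "Waiting up to 5m0s for pod … to be \"success or failure\" … Phase=\"Pending\" … Elapsed: …" entries above come from a poll loop that re-reads the pod phase until it reaches a terminal phase or the deadline expires. A minimal sketch of that polling pattern (a hypothetical `get_phase` callback standing in for the API call; this is not the actual e2e framework code):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want`.

    Mirrors the log shape above: each attempt reports the current
    phase and the elapsed time since the first poll. Raises
    TimeoutError if the deadline passes first.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r} elapsed={elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still {phase!r} after {timeout}s')
        sleep(interval)

# Usage with a canned phase sequence standing in for the API server:
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_phase(lambda: next(phases),
                        interval=0, sleep=lambda s: None)
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is also why the usage example can run instantly.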
• [SLOW TEST:40.600 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":258,"skipped":4267,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:49:27.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 1 13:49:27.229: INFO: Waiting up to 5m0s for pod "pod-72c9d509-3ab8-4316-a41b-beb684446b66" in namespace "emptydir-6666" to be "success or failure" Jul 1 13:49:27.232: INFO: Pod "pod-72c9d509-3ab8-4316-a41b-beb684446b66": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504876ms Jul 1 13:49:29.300: INFO: Pod "pod-72c9d509-3ab8-4316-a41b-beb684446b66": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.071547319s Jul 1 13:49:31.305: INFO: Pod "pod-72c9d509-3ab8-4316-a41b-beb684446b66": Phase="Running", Reason="", readiness=true. Elapsed: 4.076077085s Jul 1 13:49:33.309: INFO: Pod "pod-72c9d509-3ab8-4316-a41b-beb684446b66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080161604s STEP: Saw pod success Jul 1 13:49:33.309: INFO: Pod "pod-72c9d509-3ab8-4316-a41b-beb684446b66" satisfied condition "success or failure" Jul 1 13:49:33.312: INFO: Trying to get logs from node jerma-worker2 pod pod-72c9d509-3ab8-4316-a41b-beb684446b66 container test-container: STEP: delete the pod Jul 1 13:49:33.392: INFO: Waiting for pod pod-72c9d509-3ab8-4316-a41b-beb684446b66 to disappear Jul 1 13:49:33.400: INFO: Pod pod-72c9d509-3ab8-4316-a41b-beb684446b66 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:49:33.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6666" for this suite. 
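The emptydir test above ("Creating a pod to test emptydir 0666 on tmpfs") writes a file onto a tmpfs-backed emptyDir volume and asserts its permission bits are 0666. The mode check itself can be sketched locally in Python (an illustration of the assertion only, not the test's own implementation):

```python
import os
import stat
import tempfile

def file_mode(path):
    """Return the permission bits of `path` as an octal string, e.g. '0666'."""
    return format(stat.S_IMODE(os.stat(path).st_mode), "04o")

# Create a scratch file and force the mode the e2e test expects.
# An explicit chmod is not filtered by the umask, unlike open()'s mode.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)
mode = file_mode(path)
print(mode)  # expected '0666'
os.remove(path)
```

`stat.S_IMODE` masks off the file-type bits, leaving only the permission bits that the conformance test compares against the requested volume mode.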
• [SLOW TEST:6.337 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4268,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:49:33.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7617.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7617.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7617.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7617.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7617.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7617.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 13:49:39.536: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:39.540: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:39.543: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:39.547: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:39.558: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:39.561: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod 
dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:39.563: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:39.566: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:39.570: INFO: Lookups using dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local] Jul 1 13:49:44.575: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:44.579: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:44.583: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod 
dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:44.586: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:44.595: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:44.599: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:44.604: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:44.607: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:44.612: INFO: Lookups using dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local] Jul 1 13:49:49.575: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:49.578: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:49.581: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:49.584: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:49.591: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:49.593: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:49.596: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod 
dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:49.598: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:49.603: INFO: Lookups using dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local] Jul 1 13:49:54.575: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:54.578: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:54.581: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:54.584: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod 
dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:54.593: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:54.596: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:54.598: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:54.601: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:54.607: INFO: Lookups using dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local] Jul 1 13:49:59.575: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local 
from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:59.579: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:59.583: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:59.586: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:59.598: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:59.601: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:59.604: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:59.607: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the 
server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:49:59.613: INFO: Lookups using dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local] Jul 1 13:50:04.575: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:50:04.578: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:50:04.582: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:50:04.586: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:50:04.617: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod 
dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:50:04.620: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:50:04.624: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:50:04.627: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local from pod dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b: the server could not find the requested resource (get pods dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b) Jul 1 13:50:04.634: INFO: Lookups using dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7617.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7617.svc.cluster.local jessie_udp@dns-test-service-2.dns-7617.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7617.svc.cluster.local] Jul 1 13:50:09.636: INFO: DNS probes using dns-7617/dns-test-fcf1b17d-d20e-46e4-ade9-e4a00e5c3f3b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:50:10.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-7617" for this suite. • [SLOW TEST:37.063 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":260,"skipped":4287,"failed":0} S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:50:10.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:50:11.048: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 1 13:50:11.096: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 1 13:50:16.099: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 13:50:16.099: INFO: Creating deployment "test-rolling-update-deployment" Jul 1 13:50:16.127: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 1 13:50:16.221: INFO: new replicaset for 
deployment "test-rolling-update-deployment" is yet to be created Jul 1 13:50:18.229: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 1 13:50:18.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208216, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208216, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208216, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208216, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:50:20.235: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 1 13:50:20.246: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2301 /apis/apps/v1/namespaces/deployment-2301/deployments/test-rolling-update-deployment ed99f353-20cb-4262-9824-19d7025f3277 28798222 1 2020-07-01 13:50:16 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004026e68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-01 13:50:16 +0000 UTC,LastTransitionTime:2020-07-01 13:50:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-07-01 13:50:19 +0000 UTC,LastTransitionTime:2020-07-01 13:50:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 1 13:50:20.248: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2301 /apis/apps/v1/namespaces/deployment-2301/replicasets/test-rolling-update-deployment-67cf4f6444 15d7c131-0b99-46c6-8cc8-c6cb3427c199 28798211 1 2020-07-01 13:50:16 +0000 UTC map[name:sample-pod 
pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ed99f353-20cb-4262-9824-19d7025f3277 0xc004027307 0xc004027308}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004027378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 1 13:50:20.248: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 1 13:50:20.249: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2301 /apis/apps/v1/namespaces/deployment-2301/replicasets/test-rolling-update-controller 110b5be3-08fb-4194-aa99-d61f0ff7f88a 28798221 2 2020-07-01 13:50:11 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 
ed99f353-20cb-4262-9824-19d7025f3277 0xc004027237 0xc004027238}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004027298 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 13:50:20.252: INFO: Pod "test-rolling-update-deployment-67cf4f6444-lzwrx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-lzwrx test-rolling-update-deployment-67cf4f6444- deployment-2301 /api/v1/namespaces/deployment-2301/pods/test-rolling-update-deployment-67cf4f6444-lzwrx f5327258-0a93-4ff3-ae0e-656b5a88aaef 28798210 0 2020-07-01 13:50:16 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 15d7c131-0b99-46c6-8cc8-c6cb3427c199 0xc003ff9677 0xc003ff9678}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-97gbg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-97gbg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-97gbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},
Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:50:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:50:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:50:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 13:50:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.110,StartTime:2020-07-01 13:50:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 13:50:18 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://0c2f08f579479aa549a855609cf3cd5483b5848196090fc4fc2bdc63da61c407,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.110,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:50:20.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2301" for this suite. • [SLOW TEST:9.782 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":261,"skipped":4288,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:50:20.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in 
namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jul 1 13:50:20.502: INFO: Waiting up to 5m0s for pod "var-expansion-c474508d-2cc1-4d4a-9ab4-d243a1dbda1e" in namespace "var-expansion-1106" to be "success or failure" Jul 1 13:50:20.539: INFO: Pod "var-expansion-c474508d-2cc1-4d4a-9ab4-d243a1dbda1e": Phase="Pending", Reason="", readiness=false. Elapsed: 37.125948ms Jul 1 13:50:22.543: INFO: Pod "var-expansion-c474508d-2cc1-4d4a-9ab4-d243a1dbda1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041329578s Jul 1 13:50:24.564: INFO: Pod "var-expansion-c474508d-2cc1-4d4a-9ab4-d243a1dbda1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062506445s STEP: Saw pod success Jul 1 13:50:24.564: INFO: Pod "var-expansion-c474508d-2cc1-4d4a-9ab4-d243a1dbda1e" satisfied condition "success or failure" Jul 1 13:50:24.567: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-c474508d-2cc1-4d4a-9ab4-d243a1dbda1e container dapi-container: STEP: delete the pod Jul 1 13:50:24.627: INFO: Waiting for pod var-expansion-c474508d-2cc1-4d4a-9ab4-d243a1dbda1e to disappear Jul 1 13:50:24.648: INFO: Pod var-expansion-c474508d-2cc1-4d4a-9ab4-d243a1dbda1e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:50:24.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1106" for this suite. 
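The variable-expansion case the log records above (a container whose args reference an environment variable, run to completion, then checked via logs) can be reproduced outside the suite with a minimal pod manifest. This is a sketch: the name, image, and variable below are illustrative assumptions, not the values the framework generated.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # illustrative name, not the framework's
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29           # assumed image; the test uses its own
    command: ["/bin/sh", "-c"]
    # $(POD_NAME) is expanded by the kubelet before the container starts,
    # which is the substitution behavior the conformance test verifies
    args: ["echo substituted: $(POD_NAME)"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
```

After the pod reaches `Succeeded`, `kubectl logs var-expansion-demo` would show the expanded value, mirroring the "Saw pod success" / "Trying to get logs" steps in the log.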
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4294,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:50:25.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 1 13:50:26.002: INFO: Waiting up to 5m0s for pod "downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6" in namespace "downward-api-902" to be "success or failure" Jul 1 13:50:26.084: INFO: Pod "downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 81.819267ms Jul 1 13:50:28.138: INFO: Pod "downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135362586s Jul 1 13:50:30.142: INFO: Pod "downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139396021s Jul 1 13:50:32.315: INFO: Pod "downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.312758603s STEP: Saw pod success Jul 1 13:50:32.315: INFO: Pod "downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6" satisfied condition "success or failure" Jul 1 13:50:32.319: INFO: Trying to get logs from node jerma-worker2 pod downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6 container dapi-container: STEP: delete the pod Jul 1 13:50:32.567: INFO: Waiting for pod downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6 to disappear Jul 1 13:50:32.600: INFO: Pod downward-api-43e2dde6-0fdc-44b8-a102-829ad1afb1f6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:50:32.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-902" for this suite. • [SLOW TEST:7.535 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4312,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:50:32.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 13:50:32.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7426' Jul 1 13:50:32.923: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 1 13:50:32.923: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jul 1 13:50:32.938: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jul 1 13:50:32.965: INFO: scanned /root for discovery docs: Jul 1 13:50:32.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7426' Jul 1 13:50:49.835: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 1 13:50:49.835: INFO: stdout: "Created e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865\nScaling up e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jul 1 13:50:49.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7426' Jul 1 13:50:49.937: INFO: stderr: "" Jul 1 13:50:49.937: INFO: stdout: "e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865-zpc9x " Jul 1 13:50:49.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865-zpc9x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7426' Jul 1 13:50:50.043: INFO: stderr: "" Jul 1 13:50:50.044: INFO: stdout: "true" Jul 1 13:50:50.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865-zpc9x -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7426' Jul 1 13:50:50.143: INFO: stderr: "" Jul 1 13:50:50.143: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jul 1 13:50:50.143: INFO: e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865-zpc9x is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Jul 1 13:50:50.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7426' Jul 1 13:50:50.251: INFO: stderr: "" Jul 1 13:50:50.251: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:50:50.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7426" for this suite. 
• [SLOW TEST:17.698 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":264,"skipped":4331,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:50:50.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 1 13:50:50.386: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 13:50:50.512: INFO: Waiting for terminating namespaces to be deleted... 
Jul 1 13:50:50.515: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Jul 1 13:50:50.521: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:50:50.521: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 13:50:50.521: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:50:50.521: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 13:50:50.521: INFO: e2e-test-httpd-rc-bdfb7a2c35ee775447ce1ba430bc3865-zpc9x from kubectl-7426 started at 2020-07-01 13:50:33 +0000 UTC (1 container statuses recorded) Jul 1 13:50:50.521: INFO: Container e2e-test-httpd-rc ready: true, restart count 0 Jul 1 13:50:50.521: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Jul 1 13:50:50.531: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:50:50.531: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 13:50:50.531: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Jul 1 13:50:50.531: INFO: Container kube-hunter ready: false, restart count 0 Jul 1 13:50:50.531: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Jul 1 13:50:50.531: INFO: Container kindnet-cni ready: true, restart count 3 Jul 1 13:50:50.531: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Jul 1 13:50:50.531: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b27d58df-297c-49a7-bb43-469be2a0e4da 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-b27d58df-297c-49a7-bb43-469be2a0e4da off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b27d58df-297c-49a7-bb43-469be2a0e4da [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:50:58.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5335" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.439 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":265,"skipped":4350,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:50:58.748: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:50:59.259: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:51:01.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208259, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208259, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208259, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208259, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:51:04.311: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the 
AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:51:16.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6106" for this suite. STEP: Destroying namespace "webhook-6106-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.131 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":266,"skipped":4358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:51:16.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:51:17.035: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f35d21bc-a0e7-4c71-b31c-8c961375e514" in namespace "downward-api-2433" to be "success or failure" Jul 1 13:51:17.045: INFO: Pod "downwardapi-volume-f35d21bc-a0e7-4c71-b31c-8c961375e514": Phase="Pending", Reason="", readiness=false. Elapsed: 9.939625ms Jul 1 13:51:19.099: INFO: Pod "downwardapi-volume-f35d21bc-a0e7-4c71-b31c-8c961375e514": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063742134s Jul 1 13:51:21.152: INFO: Pod "downwardapi-volume-f35d21bc-a0e7-4c71-b31c-8c961375e514": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.117322892s STEP: Saw pod success Jul 1 13:51:21.152: INFO: Pod "downwardapi-volume-f35d21bc-a0e7-4c71-b31c-8c961375e514" satisfied condition "success or failure" Jul 1 13:51:21.155: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f35d21bc-a0e7-4c71-b31c-8c961375e514 container client-container: STEP: delete the pod Jul 1 13:51:21.208: INFO: Waiting for pod downwardapi-volume-f35d21bc-a0e7-4c71-b31c-8c961375e514 to disappear Jul 1 13:51:21.385: INFO: Pod downwardapi-volume-f35d21bc-a0e7-4c71-b31c-8c961375e514 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:51:21.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2433" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4383,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:51:21.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once 
STEP: closing the watch once it receives two notifications Jul 1 13:51:22.091: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6418 /api/v1/namespaces/watch-6418/configmaps/e2e-watch-test-watch-closed 699c74cd-aa51-4825-9bb8-468b47e7c3df 28798676 0 2020-07-01 13:51:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 13:51:22.091: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6418 /api/v1/namespaces/watch-6418/configmaps/e2e-watch-test-watch-closed 699c74cd-aa51-4825-9bb8-468b47e7c3df 28798677 0 2020-07-01 13:51:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 1 13:51:22.128: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6418 /api/v1/namespaces/watch-6418/configmaps/e2e-watch-test-watch-closed 699c74cd-aa51-4825-9bb8-468b47e7c3df 28798679 0 2020-07-01 13:51:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 13:51:22.128: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6418 /api/v1/namespaces/watch-6418/configmaps/e2e-watch-test-watch-closed 699c74cd-aa51-4825-9bb8-468b47e7c3df 28798681 0 2020-07-01 13:51:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:51:22.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6418" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":268,"skipped":4399,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:51:22.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 1 13:51:22.263: INFO: Waiting up to 5m0s for pod "pod-f0f3d76e-70fb-4ff9-9184-9527270e3875" in namespace "emptydir-5178" to be "success or failure" Jul 1 13:51:22.272: INFO: Pod "pod-f0f3d76e-70fb-4ff9-9184-9527270e3875": Phase="Pending", Reason="", readiness=false. Elapsed: 9.319819ms Jul 1 13:51:24.276: INFO: Pod "pod-f0f3d76e-70fb-4ff9-9184-9527270e3875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013634678s Jul 1 13:51:26.281: INFO: Pod "pod-f0f3d76e-70fb-4ff9-9184-9527270e3875": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017797299s STEP: Saw pod success Jul 1 13:51:26.281: INFO: Pod "pod-f0f3d76e-70fb-4ff9-9184-9527270e3875" satisfied condition "success or failure" Jul 1 13:51:26.284: INFO: Trying to get logs from node jerma-worker2 pod pod-f0f3d76e-70fb-4ff9-9184-9527270e3875 container test-container: STEP: delete the pod Jul 1 13:51:26.358: INFO: Waiting for pod pod-f0f3d76e-70fb-4ff9-9184-9527270e3875 to disappear Jul 1 13:51:26.372: INFO: Pod pod-f0f3d76e-70fb-4ff9-9184-9527270e3875 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:51:26.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5178" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:51:26.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:51:26.486: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ef11a807-1643-4e69-b103-274b15d30049" in namespace "security-context-test-2320" to be "success or failure" Jul 1 13:51:26.510: INFO: Pod "busybox-readonly-false-ef11a807-1643-4e69-b103-274b15d30049": Phase="Pending", Reason="", readiness=false. Elapsed: 24.918336ms Jul 1 13:51:28.516: INFO: Pod "busybox-readonly-false-ef11a807-1643-4e69-b103-274b15d30049": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030293442s Jul 1 13:51:30.520: INFO: Pod "busybox-readonly-false-ef11a807-1643-4e69-b103-274b15d30049": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034771703s Jul 1 13:51:30.520: INFO: Pod "busybox-readonly-false-ef11a807-1643-4e69-b103-274b15d30049" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:51:30.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2320" for this suite. 
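The `readOnlyRootFilesystem=false` pod above can be approximated by the following manifest. This is an illustrative sketch reconstructed from the log (image and command are assumptions; the log only shows the pod name and outcome): with the flag false, a write to the container's root filesystem succeeds, so the pod exits cleanly and reaches `Succeeded`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-false
    image: busybox            # assumed; any shell-capable image works
    # Writing to the root filesystem must succeed when it is not read-only.
    command: ["/bin/sh", "-c", "touch /file_on_rootfs"]
    securityContext:
      readOnlyRootFilesystem: false
```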
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4423,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:51:30.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 1 13:51:38.650: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 13:51:38.680: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 13:51:40.680: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 13:51:40.685: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 13:51:42.680: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 13:51:42.686: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 13:51:44.680: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 13:51:44.684: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 13:51:46.680: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 13:51:46.684: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 13:51:48.680: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 13:51:48.685: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 13:51:50.680: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 13:51:50.685: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:51:50.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6223" for this suite. 
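The lifecycle-hook test first starts a separate handler pod (the "container to handle the HTTPGet hook request" above), then creates `pod-with-poststart-http-hook`, whose postStart hook issues an HTTP GET against that handler. A minimal sketch of the second pod, with the handler's address and the image as assumed placeholders, might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1   # assumed; any long-running image works
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart
          port: 8080
          host: 10.244.1.10       # hypothetical IP of the handler pod
```

The kubelet blocks the container's transition to Running until the postStart request completes, which is what "check poststart hook" verifies before the pod is deleted.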
• [SLOW TEST:20.166 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:51:50.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-31d3395d-2808-4d87-9d0c-091163f4bd39 STEP: Creating a pod to test consume configMaps Jul 1 13:51:50.833: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954" in namespace "projected-8580" to be "success or failure" Jul 1 13:51:50.842: INFO: Pod "pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954": 
Phase="Pending", Reason="", readiness=false. Elapsed: 8.516496ms Jul 1 13:51:52.846: INFO: Pod "pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012371583s Jul 1 13:51:54.849: INFO: Pod "pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954": Phase="Running", Reason="", readiness=true. Elapsed: 4.015921378s Jul 1 13:51:57.003: INFO: Pod "pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169393106s STEP: Saw pod success Jul 1 13:51:57.003: INFO: Pod "pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954" satisfied condition "success or failure" Jul 1 13:51:57.005: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954 container projected-configmap-volume-test: STEP: delete the pod Jul 1 13:51:57.162: INFO: Waiting for pod pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954 to disappear Jul 1 13:51:57.176: INFO: Pod pod-projected-configmaps-fcc4f047-d786-484d-8f7b-fdab8c074954 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:51:57.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8580" for this suite. 
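The projected-ConfigMap-as-non-root scenario above pairs a pod-level `runAsUser` with a `projected` volume sourcing a ConfigMap. A sketch reconstructed from the log (image, command, key name, and UID are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  securityContext:
    runAsUser: 1000              # non-root UID (assumed value)
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Read a key from the projected ConfigMap volume as the non-root user.
    command: ["/bin/sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # the test creates this first
```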
• [SLOW TEST:6.486 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4461,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:51:57.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:51:57.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f7bd546-e607-4243-8f71-cb775d0df7d9" in namespace "projected-1931" to be "success or failure" Jul 1 13:51:57.413: INFO: Pod "downwardapi-volume-9f7bd546-e607-4243-8f71-cb775d0df7d9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.715889ms Jul 1 13:51:59.417: INFO: Pod "downwardapi-volume-9f7bd546-e607-4243-8f71-cb775d0df7d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035244213s Jul 1 13:52:01.420: INFO: Pod "downwardapi-volume-9f7bd546-e607-4243-8f71-cb775d0df7d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038754876s STEP: Saw pod success Jul 1 13:52:01.420: INFO: Pod "downwardapi-volume-9f7bd546-e607-4243-8f71-cb775d0df7d9" satisfied condition "success or failure" Jul 1 13:52:01.422: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9f7bd546-e607-4243-8f71-cb775d0df7d9 container client-container: STEP: delete the pod Jul 1 13:52:01.621: INFO: Waiting for pod downwardapi-volume-9f7bd546-e607-4243-8f71-cb775d0df7d9 to disappear Jul 1 13:52:01.625: INFO: Pod downwardapi-volume-9f7bd546-e607-4243-8f71-cb775d0df7d9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:52:01.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1931" for this suite. 
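The downward API CPU test above relies on a documented default: when a container sets no CPU limit, a `resourceFieldRef` for `limits.cpu` reports the node's allocatable CPU instead. A sketch of such a pod (names and image are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No resources.limits.cpu is set, so the projected file will contain
    # the node's allocatable CPU rather than a container limit.
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```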
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4470,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:52:01.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:52:02.033: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 10.398608ms) Jul 1 13:52:02.114: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 80.414175ms) Jul 1 13:52:02.124: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 10.303152ms) Jul 1 13:52:02.128: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.446764ms) Jul 1 13:52:02.131: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.919674ms) Jul 1 13:52:02.134: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.809967ms) Jul 1 13:52:02.136: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.484127ms) Jul 1 13:52:02.139: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.62315ms) Jul 1 13:52:02.141: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.662116ms) Jul 1 13:52:02.144: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.458768ms) Jul 1 13:52:02.147: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.668058ms) Jul 1 13:52:02.149: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.732959ms) Jul 1 13:52:02.152: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.633288ms) Jul 1 13:52:02.155: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.007482ms) Jul 1 13:52:02.158: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.500928ms) Jul 1 13:52:02.160: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.836721ms) Jul 1 13:52:02.164: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.406423ms) Jul 1 13:52:02.167: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.363113ms) Jul 1 13:52:02.170: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.010062ms) Jul 1 13:52:02.174: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.263519ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:52:02.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5908" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":274,"skipped":4479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:52:02.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 1 13:52:02.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e21e0c0-da0f-421b-b65a-ae6b2a3f46fc" in namespace "projected-6907" to be "success or failure" Jul 1 13:52:02.430: INFO: Pod "downwardapi-volume-4e21e0c0-da0f-421b-b65a-ae6b2a3f46fc": Phase="Pending", Reason="", readiness=false. Elapsed: 91.673266ms Jul 1 13:52:04.446: INFO: Pod "downwardapi-volume-4e21e0c0-da0f-421b-b65a-ae6b2a3f46fc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.107689576s Jul 1 13:52:06.451: INFO: Pod "downwardapi-volume-4e21e0c0-da0f-421b-b65a-ae6b2a3f46fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11224562s STEP: Saw pod success Jul 1 13:52:06.451: INFO: Pod "downwardapi-volume-4e21e0c0-da0f-421b-b65a-ae6b2a3f46fc" satisfied condition "success or failure" Jul 1 13:52:06.455: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4e21e0c0-da0f-421b-b65a-ae6b2a3f46fc container client-container: STEP: delete the pod Jul 1 13:52:06.489: INFO: Waiting for pod downwardapi-volume-4e21e0c0-da0f-421b-b65a-ae6b2a3f46fc to disappear Jul 1 13:52:06.501: INFO: Pod downwardapi-volume-4e21e0c0-da0f-421b-b65a-ae6b2a3f46fc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:52:06.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6907" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4530,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:52:06.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8366, will wait for the garbage collector to delete the pods Jul 1 13:52:12.676: INFO: Deleting Job.batch foo took: 7.02271ms Jul 1 13:52:13.076: INFO: Terminating Job.batch foo pods took: 400.271772ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:52:49.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8366" for this suite. 
• [SLOW TEST:43.177 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":276,"skipped":4539,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:52:49.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 13:52:50.462: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 13:52:52.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208371, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208371, 
loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208371, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208370, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 13:52:54.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208371, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208371, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208371, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729208370, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 13:52:57.543: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 1 13:52:57.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource 
e2e-test-webhook-4738-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:52:58.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2424" for this suite. STEP: Destroying namespace "webhook-2424-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.144 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":277,"skipped":4550,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 1 13:52:58.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 1 13:53:05.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9602" for this suite. STEP: Destroying namespace "nsdeletetest-8462" for this suite. Jul 1 13:53:05.372: INFO: Namespace nsdeletetest-8462 was already deleted STEP: Destroying namespace "nsdeletetest-5795" for this suite. • [SLOW TEST:6.544 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":278,"skipped":4561,"failed":0} SSSJul 1 13:53:05.376: INFO: Running AfterSuite actions on all nodes Jul 1 13:53:05.376: INFO: Running AfterSuite actions on node 1 Jul 1 13:53:05.376: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 5076.756 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS