I0720 13:32:31.368126 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0720 13:32:31.368394 7 e2e.go:124] Starting e2e run "0eee3290-f559-4eda-8f35-b684cd40747d" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1595251950 - Will randomize all specs Will run 275 of 4992 specs Jul 20 13:32:31.424: INFO: >>> kubeConfig: /root/.kube/config Jul 20 13:32:31.426: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jul 20 13:32:31.443: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jul 20 13:32:31.477: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jul 20 13:32:31.477: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jul 20 13:32:31.477: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jul 20 13:32:31.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jul 20 13:32:31.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jul 20 13:32:31.485: INFO: e2e test version: v1.18.5 Jul 20 13:32:31.486: INFO: kube-apiserver version: v1.18.4 Jul 20 13:32:31.486: INFO: >>> kubeConfig: /root/.kube/config Jul 20 13:32:31.489: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:32:31.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath Jul 20 13:32:31.766: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-scf4 STEP: Creating a pod to test atomic-volume-subpath Jul 20 13:32:31.814: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-scf4" in namespace "subpath-3821" to be "Succeeded or Failed" Jul 20 13:32:31.846: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.307766ms Jul 20 13:32:35.318: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504153977s Jul 20 13:32:37.322: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.508371836s Jul 20 13:32:39.390: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.575400061s Jul 20 13:32:41.393: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. 
Elapsed: 9.579284882s Jul 20 13:32:43.443: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 11.628966525s Jul 20 13:32:45.725: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 13.911165231s Jul 20 13:32:47.730: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 15.915845828s Jul 20 13:32:49.734: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 17.919562435s Jul 20 13:32:51.755: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 19.940434442s Jul 20 13:32:53.758: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 21.94413502s Jul 20 13:32:55.763: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 23.948590443s Jul 20 13:32:58.024: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 26.210089108s Jul 20 13:33:00.148: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Running", Reason="", readiness=true. Elapsed: 28.333690733s Jul 20 13:33:02.174: INFO: Pod "pod-subpath-test-secret-scf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.359771405s STEP: Saw pod success Jul 20 13:33:02.174: INFO: Pod "pod-subpath-test-secret-scf4" satisfied condition "Succeeded or Failed" Jul 20 13:33:02.177: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-scf4 container test-container-subpath-secret-scf4: STEP: delete the pod Jul 20 13:33:02.413: INFO: Waiting for pod pod-subpath-test-secret-scf4 to disappear Jul 20 13:33:02.441: INFO: Pod pod-subpath-test-secret-scf4 no longer exists STEP: Deleting pod pod-subpath-test-secret-scf4 Jul 20 13:33:02.441: INFO: Deleting pod "pod-subpath-test-secret-scf4" in namespace "subpath-3821" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:33:02.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3821" for this suite. 
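
For reference, the subpath test above exercises a pod that mounts a Secret volume into a container through a volumeMount subPath and waits for it to reach "Succeeded or Failed". The following is a minimal client-go sketch of such a pod spec, not the e2e framework's own code; the secret name, image, command, and mount paths are illustrative assumptions rather than values taken from this run.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // buildSubpathSecretPod returns a pod roughly in the shape the test creates:
    // a Secret volume whose single key is mounted via subPath. All names below
    // ("my-secret", the busybox image, the paths) are illustrative, not from the log.
    func buildSubpathSecretPod(ns string) *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "pod-subpath-test-secret",
    			Namespace: ns,
    		},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "secret-volume",
    				VolumeSource: corev1.VolumeSource{
    					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "test-container-subpath",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "cat /mnt/secret-key && sleep 10"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "secret-volume",
    					MountPath: "/mnt/secret-key",
    					SubPath:   "secret-key", // mount a single key of the Secret, not the whole volume
    					ReadOnly:  true,
    				}},
    			}},
    		},
    	}
    }

    func main() {
    	// Print the manifest so the sketch can be inspected or piped to kubectl apply -f -.
    	out, _ := json.MarshalIndent(buildSubpathSecretPod("subpath-3821"), "", "  ")
    	fmt.Println(string(out))
    }
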
• [SLOW TEST:30.961 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":1,"skipped":31,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:33:02.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 20 13:33:03.178: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-a 86f44cf7-c919-4c46-b4a5-bf129b327af6 2719452 0 2020-07-20 13:33:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 13:33:03.178: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-a 86f44cf7-c919-4c46-b4a5-bf129b327af6 2719452 0 2020-07-20 13:33:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 20 13:33:13.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-a 86f44cf7-c919-4c46-b4a5-bf129b327af6 2719493 0 2020-07-20 13:33:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 
100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 13:33:13.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-a 86f44cf7-c919-4c46-b4a5-bf129b327af6 2719493 0 2020-07-20 13:33:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 20 13:33:23.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-a 86f44cf7-c919-4c46-b4a5-bf129b327af6 2719531 0 2020-07-20 13:33:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 13:33:23.264: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-a 86f44cf7-c919-4c46-b4a5-bf129b327af6 2719531 0 2020-07-20 13:33:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 20 13:33:33.405: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-a 86f44cf7-c919-4c46-b4a5-bf129b327af6 2719567 0 2020-07-20 13:33:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 
125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 13:33:33.405: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-a 86f44cf7-c919-4c46-b4a5-bf129b327af6 2719567 0 2020-07-20 13:33:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 20 13:33:43.819: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-b 2ef4f3a2-2bd1-487e-a1ed-5831d48a2a23 2719630 0 2020-07-20 13:33:43 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 13:33:43.819: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-b 2ef4f3a2-2bd1-487e-a1ed-5831d48a2a23 2719630 0 2020-07-20 13:33:43 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 20 13:33:53.826: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-b 2ef4f3a2-2bd1-487e-a1ed-5831d48a2a23 2719683 0 2020-07-20 13:33:43 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 13:33:53.826: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6670 /api/v1/namespaces/watch-6670/configmaps/e2e-watch-test-configmap-b 
2ef4f3a2-2bd1-487e-a1ed-5831d48a2a23 2719683 0 2020-07-20 13:33:43 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-20 13:33:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:34:03.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6670" for this suite. • [SLOW TEST:61.384 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":2,"skipped":40,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:34:03.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-9333/secret-test-a55fb350-5495-4f38-9491-801133f6c571 STEP: Creating a pod to test consume secrets Jul 20 13:34:04.140: INFO: Waiting up to 5m0s for pod "pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd" in namespace "secrets-9333" to be "Succeeded or Failed" Jul 20 13:34:04.180: INFO: Pod "pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 39.738294ms Jul 20 13:34:06.183: INFO: Pod "pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042595439s Jul 20 13:34:08.187: INFO: Pod "pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046896914s Jul 20 13:34:10.204: INFO: Pod "pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064056967s Jul 20 13:34:12.208: INFO: Pod "pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.068015297s STEP: Saw pod success Jul 20 13:34:12.209: INFO: Pod "pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd" satisfied condition "Succeeded or Failed" Jul 20 13:34:12.211: INFO: Trying to get logs from node kali-worker pod pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd container env-test: STEP: delete the pod Jul 20 13:34:12.603: INFO: Waiting for pod pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd to disappear Jul 20 13:34:12.785: INFO: Pod pod-configmaps-38b56214-cbee-49c4-a780-0d0ee82f0ebd no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:34:12.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9333" for this suite. • [SLOW TEST:9.356 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":41,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:34:13.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Jul 20 13:34:13.566: INFO: namespace kubectl-4593 Jul 20 13:34:13.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4593' Jul 20 13:34:19.243: INFO: stderr: "" Jul 20 13:34:19.243: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jul 20 13:34:20.247: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 13:34:20.247: INFO: Found 0 / 1 Jul 20 13:34:21.554: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 13:34:21.554: INFO: Found 0 / 1 Jul 20 13:34:22.385: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 13:34:22.385: INFO: Found 0 / 1 Jul 20 13:34:23.246: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 13:34:23.246: INFO: Found 0 / 1 Jul 20 13:34:24.363: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 13:34:24.363: INFO: Found 0 / 1 Jul 20 13:34:25.246: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 13:34:25.246: INFO: Found 1 / 1 Jul 20 13:34:25.246: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 20 13:34:25.249: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 13:34:25.249: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
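
The "Found 0 / 1 ... Found 1 / 1 ... WaitFor completed with timeout 5m0s" lines above come from a poll loop that lists pods matching the app=agnhost label until the expected number are Running. A minimal client-go sketch of that pattern follows; it is not the framework's WaitFor implementation, and the 2-second poll interval is an assumption, though the kubeconfig path, namespace, selector, and 5-minute timeout are taken from this run.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls until at least `want` pods matching `selector`
    // are Running in namespace `ns`, or until the timeout expires.
    func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, want int) error {
    	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return false, err
    		}
    		running := 0
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				running++
    			}
    		}
    		fmt.Printf("Found %d / %d\n", running, want)
    		return running >= want, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitForPodsRunning(cs, "kubectl-4593", "app=agnhost", 1); err != nil {
    		panic(err)
    	}
    }
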
Jul 20 13:34:25.249: INFO: wait on agnhost-master startup in kubectl-4593 Jul 20 13:34:25.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs agnhost-master-jc6ch agnhost-master --namespace=kubectl-4593' Jul 20 13:34:25.351: INFO: stderr: "" Jul 20 13:34:25.352: INFO: stdout: "Paused\n" STEP: exposing RC Jul 20 13:34:25.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4593' Jul 20 13:34:25.536: INFO: stderr: "" Jul 20 13:34:25.536: INFO: stdout: "service/rm2 exposed\n" Jul 20 13:34:25.574: INFO: Service rm2 in namespace kubectl-4593 found. STEP: exposing service Jul 20 13:34:27.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4593' Jul 20 13:34:27.741: INFO: stderr: "" Jul 20 13:34:27.741: INFO: stdout: "service/rm3 exposed\n" Jul 20 13:34:28.110: INFO: Service rm3 in namespace kubectl-4593 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:34:30.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4593" for this suite. • [SLOW TEST:17.403 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":4,"skipped":42,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:34:30.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Jul 20 13:34:32.242: INFO: Waiting up to 5m0s for pod "var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e" in namespace "var-expansion-5814" to be "Succeeded or Failed" Jul 20 13:34:33.038: INFO: Pod "var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e": Phase="Pending", Reason="", readiness=false. Elapsed: 795.205425ms Jul 20 13:34:35.212: INFO: Pod "var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.969334246s Jul 20 13:34:37.445: INFO: Pod "var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.202598315s Jul 20 13:34:39.522: INFO: Pod "var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.279446477s Jul 20 13:34:41.637: INFO: Pod "var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e": Phase="Running", Reason="", readiness=true. Elapsed: 9.394463411s Jul 20 13:34:43.724: INFO: Pod "var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.481861308s STEP: Saw pod success Jul 20 13:34:43.724: INFO: Pod "var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e" satisfied condition "Succeeded or Failed" Jul 20 13:34:43.728: INFO: Trying to get logs from node kali-worker2 pod var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e container dapi-container: STEP: delete the pod Jul 20 13:34:44.152: INFO: Waiting for pod var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e to disappear Jul 20 13:34:44.336: INFO: Pod var-expansion-3a3ce934-d000-42dc-ae05-766555b6872e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:34:44.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5814" for this suite. • [SLOW TEST:13.760 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":47,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:34:44.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-05dfafa7-034d-4b4c-b403-198a9d4ac8b6 STEP: Creating a pod to test consume configMaps Jul 20 13:34:45.453: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a" in namespace "projected-2577" to be "Succeeded or Failed" Jul 20 13:34:45.590: INFO: Pod "pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a": Phase="Pending", Reason="", readiness=false. Elapsed: 136.34324ms Jul 20 13:34:47.812: INFO: Pod "pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.359146035s Jul 20 13:34:49.834: INFO: Pod "pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.380395897s Jul 20 13:34:52.319: INFO: Pod "pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.865399674s STEP: Saw pod success Jul 20 13:34:52.319: INFO: Pod "pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a" satisfied condition "Succeeded or Failed" Jul 20 13:34:52.322: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a container projected-configmap-volume-test: STEP: delete the pod Jul 20 13:34:53.225: INFO: Waiting for pod pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a to disappear Jul 20 13:34:53.302: INFO: Pod pod-projected-configmaps-2a19f104-97c7-4e05-adb9-e2c68d44d03a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:34:53.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2577" for this suite. • [SLOW TEST:9.582 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":48,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:34:53.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:34:54.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jul 20 13:34:55.858: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T13:34:55Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T13:34:55Z]] name:name1 resourceVersion:2720067 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:331e16e4-2e4d-4d49-9fee-09d2ce3dc39c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jul 20 13:35:05.864: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T13:35:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] 
f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T13:35:05Z]] name:name2 resourceVersion:2720125 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11295b94-b492-4fa8-82b1-0da843997bed] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jul 20 13:35:16.172: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T13:34:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T13:35:15Z]] name:name1 resourceVersion:2720177 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:331e16e4-2e4d-4d49-9fee-09d2ce3dc39c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jul 20 13:35:26.179: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T13:35:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T13:35:26Z]] name:name2 resourceVersion:2720240 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11295b94-b492-4fa8-82b1-0da843997bed] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jul 20 13:35:36.188: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T13:34:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T13:35:15Z]] name:name1 resourceVersion:2720276 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:331e16e4-2e4d-4d49-9fee-09d2ce3dc39c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jul 20 13:35:46.197: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T13:35:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T13:35:26Z]] name:name2 resourceVersion:2720343 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11295b94-b492-4fa8-82b1-0da843997bed] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:35:56.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8389" for this suite. 
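
The CRD watch test above receives ADDED, MODIFIED, and DELETED events for the cluster-scoped "noxus" custom resources in mygroup.example.com/v1beta1 (the group, version, and resource come from the selfLink fields in the log). A minimal sketch of an equivalent watch using client-go's dynamic client follows; it assumes the CRD already exists and reuses this run's kubeconfig path, but it is not the test's actual implementation. The same ResultChan pattern also applies to the label-selected ConfigMap watches earlier in the log, just via the typed CoreV1 client instead.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/client-go/dynamic"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	dyn, err := dynamic.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// GroupVersionResource for the custom resources watched by the test;
    	// "noxus" is cluster-scoped, so no Namespace() call is needed.
    	gvr := schema.GroupVersionResource{
    		Group:    "mygroup.example.com",
    		Version:  "v1beta1",
    		Resource: "noxus",
    	}

    	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()

    	// Print events as they arrive, mirroring the "Got : ADDED ..." lines above.
    	for ev := range w.ResultChan() {
    		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
    	}
    }
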
• [SLOW TEST:62.845 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":7,"skipped":58,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:35:56.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6735 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 20 13:35:57.010: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 20 13:35:57.178: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 13:35:59.806: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 13:36:01.509: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 13:36:03.182: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 13:36:05.314: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:07.183: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:09.182: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:11.183: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:13.183: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:15.265: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:17.182: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:19.182: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:21.536: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 13:36:23.691: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 20 13:36:24.511: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 20 13:36:35.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.219:8080/dial?request=hostname&protocol=http&host=10.244.2.218&port=8080&tries=1'] Namespace:pod-network-test-6735 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jul 20 13:36:35.825: INFO: >>> kubeConfig: /root/.kube/config I0720 13:36:35.855953 7 log.go:172] (0xc0025c6000) (0xc001bb86e0) Create stream I0720 13:36:35.855985 7 log.go:172] (0xc0025c6000) (0xc001bb86e0) Stream added, broadcasting: 1 I0720 13:36:35.858914 7 log.go:172] (0xc0025c6000) Reply frame received for 1 I0720 13:36:35.858977 7 log.go:172] (0xc0025c6000) (0xc0017ee320) Create stream I0720 13:36:35.858991 7 log.go:172] (0xc0025c6000) (0xc0017ee320) Stream added, broadcasting: 3 I0720 13:36:35.859827 7 log.go:172] (0xc0025c6000) Reply frame received for 3 I0720 13:36:35.859861 7 log.go:172] (0xc0025c6000) (0xc001bb8780) Create stream I0720 13:36:35.859873 7 log.go:172] (0xc0025c6000) (0xc001bb8780) Stream added, broadcasting: 5 I0720 13:36:35.860786 7 log.go:172] (0xc0025c6000) Reply frame received for 5 I0720 13:36:35.953615 7 log.go:172] (0xc0025c6000) Data frame received for 3 I0720 13:36:35.953666 7 log.go:172] (0xc0017ee320) (3) Data frame handling I0720 13:36:35.953692 7 log.go:172] (0xc0017ee320) (3) Data frame sent I0720 13:36:35.954203 7 log.go:172] (0xc0025c6000) Data frame received for 5 I0720 13:36:35.954241 7 log.go:172] (0xc001bb8780) (5) Data frame handling I0720 13:36:35.954269 7 log.go:172] (0xc0025c6000) Data frame received for 3 I0720 13:36:35.954282 7 log.go:172] (0xc0017ee320) (3) Data frame handling I0720 13:36:35.955829 7 log.go:172] (0xc0025c6000) Data frame received for 1 I0720 13:36:35.955842 7 log.go:172] (0xc001bb86e0) (1) Data frame handling I0720 13:36:35.955861 7 log.go:172] (0xc001bb86e0) (1) Data frame sent I0720 13:36:35.955877 7 log.go:172] (0xc0025c6000) (0xc001bb86e0) Stream removed, broadcasting: 1 I0720 13:36:35.956161 7 log.go:172] (0xc0025c6000) (0xc001bb86e0) Stream removed, broadcasting: 1 I0720 13:36:35.956173 7 log.go:172] (0xc0025c6000) (0xc0017ee320) Stream removed, broadcasting: 3 I0720 13:36:35.956263 7 log.go:172] (0xc0025c6000) (0xc001bb8780) Stream removed, broadcasting: 5 Jul 20 13:36:35.956: INFO: Waiting for responses: map[] Jul 20 13:36:35.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.219:8080/dial?request=hostname&protocol=http&host=10.244.1.68&port=8080&tries=1'] Namespace:pod-network-test-6735 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 13:36:35.959: INFO: >>> kubeConfig: /root/.kube/config I0720 13:36:35.989331 7 log.go:172] (0xc001c100b0) (0xc001bb8be0) Create stream I0720 13:36:35.989354 7 log.go:172] (0xc001c100b0) (0xc001bb8be0) Stream added, broadcasting: 1 I0720 13:36:35.992588 7 log.go:172] (0xc001c100b0) Reply frame received for 1 I0720 13:36:35.992625 7 log.go:172] (0xc001c100b0) (0xc001bb8c80) Create stream I0720 13:36:35.992642 7 log.go:172] (0xc001c100b0) (0xc001bb8c80) Stream added, broadcasting: 3 I0720 13:36:35.993602 7 log.go:172] (0xc001c100b0) Reply frame received for 3 I0720 13:36:35.993622 7 log.go:172] (0xc001c100b0) (0xc001f4e320) Create stream I0720 13:36:35.993633 7 log.go:172] (0xc001c100b0) (0xc001f4e320) Stream added, broadcasting: 5 I0720 13:36:35.994466 7 log.go:172] (0xc001c100b0) Reply frame received for 5 I0720 13:36:36.054574 7 log.go:172] (0xc001c100b0) Data frame received for 3 I0720 13:36:36.054614 7 log.go:172] (0xc001bb8c80) (3) Data frame handling I0720 13:36:36.054635 7 log.go:172] (0xc001bb8c80) (3) Data frame sent I0720 13:36:36.055277 7 log.go:172] (0xc001c100b0) Data frame received for 3 I0720 13:36:36.055319 7 log.go:172] (0xc001bb8c80) 
(3) Data frame handling I0720 13:36:36.055356 7 log.go:172] (0xc001c100b0) Data frame received for 5 I0720 13:36:36.055384 7 log.go:172] (0xc001f4e320) (5) Data frame handling I0720 13:36:36.056837 7 log.go:172] (0xc001c100b0) Data frame received for 1 I0720 13:36:36.056866 7 log.go:172] (0xc001bb8be0) (1) Data frame handling I0720 13:36:36.056890 7 log.go:172] (0xc001bb8be0) (1) Data frame sent I0720 13:36:36.056910 7 log.go:172] (0xc001c100b0) (0xc001bb8be0) Stream removed, broadcasting: 1 I0720 13:36:36.056956 7 log.go:172] (0xc001c100b0) Go away received I0720 13:36:36.057019 7 log.go:172] (0xc001c100b0) (0xc001bb8be0) Stream removed, broadcasting: 1 I0720 13:36:36.057043 7 log.go:172] (0xc001c100b0) (0xc001bb8c80) Stream removed, broadcasting: 3 I0720 13:36:36.057054 7 log.go:172] (0xc001c100b0) (0xc001f4e320) Stream removed, broadcasting: 5 Jul 20 13:36:36.057: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:36:36.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6735" for this suite. • [SLOW TEST:39.609 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":60,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:36:36.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-fdc0be61-f8ca-4319-8611-5bd947d156e4 STEP: Creating a pod to test consume secrets Jul 20 13:36:37.312: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131" in namespace "projected-3543" to be "Succeeded or Failed" Jul 20 13:36:37.488: INFO: Pod "pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131": Phase="Pending", Reason="", readiness=false. Elapsed: 175.33763ms Jul 20 13:36:39.491: INFO: Pod "pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1790791s Jul 20 13:36:41.866: INFO: Pod "pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.553773113s Jul 20 13:36:44.116: INFO: Pod "pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131": Phase="Running", Reason="", readiness=true. Elapsed: 6.803642708s Jul 20 13:36:46.332: INFO: Pod "pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131": Phase="Running", Reason="", readiness=true. Elapsed: 9.01964006s Jul 20 13:36:48.871: INFO: Pod "pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.558703408s STEP: Saw pod success Jul 20 13:36:48.871: INFO: Pod "pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131" satisfied condition "Succeeded or Failed" Jul 20 13:36:48.960: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131 container projected-secret-volume-test: STEP: delete the pod Jul 20 13:36:49.351: INFO: Waiting for pod pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131 to disappear Jul 20 13:36:49.393: INFO: Pod pod-projected-secrets-f8bf0e63-c58c-4d05-be58-6acf406bd131 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:36:49.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3543" for this suite. • [SLOW TEST:13.215 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":72,"failed":0} [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:36:49.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:36:50.248: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 20 13:36:55.422: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 20 13:36:57.463: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jul 20 13:36:59.315: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7058 /apis/apps/v1/namespaces/deployment-7058/deployments/test-cleanup-deployment 2ea6ca32-5cd7-479b-a9ea-93f38ff69d8c 2720739 1 2020-07-20 13:36:58 +0000 UTC map[name:cleanup-pod] map[] [] [] 
[{e2e.test Update apps/v1 2020-07-20 13:36:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d63fb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jul 20 13:36:59.320: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-7058 /apis/apps/v1/namespaces/deployment-7058/replicasets/test-cleanup-deployment-b4867b47f 86d3283d-866d-44cf-9ddb-34edeb749c82 2720743 1 2020-07-20 13:36:58 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2ea6ca32-5cd7-479b-a9ea-93f38ff69d8c 0xc00106a5e0 0xc00106a5e1}] [] [{kube-controller-manager Update apps/v1 2020-07-20 13:36:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 101 97 54 99 97 51 50 45 53 99 100 55 45 52 55 57 98 45 97 57 101 97 45 57 51 102 51 56 102 102 54 57 100 56 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 
123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00106a658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 13:36:59.320: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 20 13:36:59.320: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7058 /apis/apps/v1/namespaces/deployment-7058/replicasets/test-cleanup-controller 82984a14-a8eb-490c-b03c-88533b659f2b 2720742 1 2020-07-20 13:36:50 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 2ea6ca32-5cd7-479b-a9ea-93f38ff69d8c 0xc00106a4bf 0xc00106a4e0}] [] [{e2e.test Update apps/v1 2020-07-20 13:36:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 
99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-20 13:36:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 101 97 54 99 97 51 50 45 53 99 100 55 45 52 55 57 98 45 97 57 101 97 45 57 51 102 51 56 102 102 54 57 100 56 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00106a578 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 20 13:36:59.414: INFO: Pod "test-cleanup-controller-qvvsv" is available: &Pod{ObjectMeta:{test-cleanup-controller-qvvsv test-cleanup-controller- deployment-7058 /api/v1/namespaces/deployment-7058/pods/test-cleanup-controller-qvvsv 9fd9dc48-2ddd-4f77-84cb-a20f1b7fa8dc 2720728 0 2020-07-20 
13:36:50 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 82984a14-a8eb-490c-b03c-88533b659f2b 0xc002e918d7 0xc002e918d8}] [] [{kube-controller-manager Update v1 2020-07-20 13:36:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 50 57 56 52 97 49 52 45 97 56 101 98 45 52 57 48 99 45 98 48 51 99 45 56 56 53 51 51 98 54 53 57 102 50 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:36:56 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 
111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 50 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvrh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvrh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvrh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:36:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-07-20 13:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:36:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.221,StartTime:2020-07-20 13:36:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:36:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3ca2b5712c75a23f0de19625e5e20af4bfcb5b4c229ada6c7d83cbf171f93bc7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:36:59.415: INFO: Pod "test-cleanup-deployment-b4867b47f-6lx26" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-6lx26 test-cleanup-deployment-b4867b47f- deployment-7058 /api/v1/namespaces/deployment-7058/pods/test-cleanup-deployment-b4867b47f-6lx26 9a00aaf5-f2bb-4545-896e-55b921322a4e 2720748 0 2020-07-20 13:36:58 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 86d3283d-866d-44cf-9ddb-34edeb749c82 0xc002e91b60 0xc002e91b61}] [] [{kube-controller-manager Update v1 2020-07-20 13:36:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 54 100 51 50 56 51 100 45 56 54 54 100 45 52 52 99 102 45 57 100 100 98 45 51 52 101 100 101 98 55 52 57 99 56 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 
108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvrh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvrh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvrh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:36:59 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:36:59.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7058" for this suite. • [SLOW TEST:9.969 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":10,"skipped":72,"failed":0} [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:36:59.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-9574163d-c449-4251-9b97-fcce624136cf STEP: Creating secret with name s-test-opt-upd-dbd3ba41-d99b-4199-af3f-133fb8f31c22 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9574163d-c449-4251-9b97-fcce624136cf STEP: Updating secret s-test-opt-upd-dbd3ba41-d99b-4199-af3f-133fb8f31c22 STEP: Creating secret with name s-test-opt-create-ce8562f8-8355-4768-bf12-eb0b30273866 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:38:30.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-56" for this suite. 
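The optional-Secret behavior exercised in this test can be reproduced with a pod whose Secret volumes are marked Optional: the pod starts even while a referenced Secret is missing, and the kubelet later projects created or updated Secret data into the mounted files. Below is a minimal Go sketch of that shape, not the framework's exact manifest; the secret names, image, command and mount paths are illustrative.

```go
// Sketch: a pod mounting optional Secrets, mirroring the s-test-opt-del /
// s-test-opt-upd / s-test-opt-create pattern above (illustrative names/paths).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	secretVol := func(name, secret string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName: secret,
					Optional:   &optional, // pod starts even if the Secret is absent
				},
			},
		}
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				secretVol("del-volume", "s-test-opt-del"),
				secretVol("upd-volume", "s-test-opt-upd"),
				secretVol("create-volume", "s-test-opt-create"),
			},
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "while true; do ls /etc/secrets/*; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "del-volume", MountPath: "/etc/secrets/del", ReadOnly: true},
					{Name: "upd-volume", MountPath: "/etc/secrets/upd", ReadOnly: true},
					{Name: "create-volume", MountPath: "/etc/secrets/create", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```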
• [SLOW TEST:90.724 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":72,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:38:30.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-87936731-c177-4870-a71e-1cb67dda541a STEP: Creating a pod to test consume secrets Jul 20 13:38:31.216: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5" in namespace "projected-8761" to be "Succeeded or Failed" Jul 20 13:38:31.572: INFO: Pod "pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5": Phase="Pending", Reason="", readiness=false. Elapsed: 356.571541ms Jul 20 13:38:33.577: INFO: Pod "pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360978707s Jul 20 13:38:35.801: INFO: Pod "pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.584808481s Jul 20 13:38:37.992: INFO: Pod "pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5": Phase="Running", Reason="", readiness=true. Elapsed: 6.775792144s Jul 20 13:38:40.117: INFO: Pod "pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.901355008s STEP: Saw pod success Jul 20 13:38:40.117: INFO: Pod "pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5" satisfied condition "Succeeded or Failed" Jul 20 13:38:40.120: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5 container projected-secret-volume-test: STEP: delete the pod Jul 20 13:38:40.706: INFO: Waiting for pod pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5 to disappear Jul 20 13:38:40.739: INFO: Pod pod-projected-secrets-c0bfd127-a8b5-4637-a5a0-038bb15853e5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:38:40.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8761" for this suite. 
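The projected-secret test above combines three knobs: a projected volume sourcing a Secret, an explicit defaultMode on the projected items, and a pod-level security context running as non-root with fsGroup set. A minimal Go sketch of that combination follows; the UID/GID, mode, names and paths are illustrative assumptions, not the test's exact values.

```go
// Sketch: projected Secret volume with defaultMode, consumed by a non-root pod
// with fsGroup set (illustrative values).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var (
		mode    int32 = 0o440 // file mode applied to projected items
		uid     int64 = 1000  // run as a non-root user
		fsGroup int64 = 2000  // group ownership applied to the volume
	)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secret"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```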
• [SLOW TEST:10.472 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":107,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:38:40.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 20 13:38:53.866: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 13:38:53.938: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 13:38:55.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 13:38:55.943: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 13:38:57.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 13:38:57.947: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 13:38:59.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 13:39:00.327: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 13:39:01.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 13:39:01.942: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 13:39:03.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 13:39:04.268: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 13:39:05.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 13:39:05.942: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:39:05.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9636" for this suite. 
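The preStop hook checked above is declared on the container's lifecycle stanza; the kubelet runs the exec command inside the container before stopping it, which is how the test's handler pod observes the hook firing. A minimal Go sketch follows, assuming the v1.18 core API where the handler type is named Handler (newer API versions call it LifecycleHandler); the hook command and the handler-service endpoint it calls are illustrative placeholders.

```go
// Sketch: a pod with a preStop exec hook (illustrative command and target).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler is the v1.18 type name; newer releases rename it LifecycleHandler.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Runs inside the container before it is terminated;
							// "handler-service" is a hypothetical endpoint, not from the log.
							Command: []string{"sh", "-c", "wget -qO- http://handler-service:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```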
• [SLOW TEST:25.182 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:39:05.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 20 13:39:06.283: INFO: Waiting up to 5m0s for pod "pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9" in namespace "emptydir-7007" to be "Succeeded or Failed" Jul 20 13:39:06.309: INFO: Pod "pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.087421ms Jul 20 13:39:08.321: INFO: Pod "pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038264772s Jul 20 13:39:10.536: INFO: Pod "pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253589032s Jul 20 13:39:12.843: INFO: Pod "pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.559985085s STEP: Saw pod success Jul 20 13:39:12.843: INFO: Pod "pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9" satisfied condition "Succeeded or Failed" Jul 20 13:39:12.846: INFO: Trying to get logs from node kali-worker pod pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9 container test-container: STEP: delete the pod Jul 20 13:39:13.134: INFO: Waiting for pod pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9 to disappear Jul 20 13:39:13.187: INFO: Pod pod-1dbfefa5-5a06-4a81-9dc8-cbf11d0bb4c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:39:13.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7007" for this suite. 
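The emptyDir case above boils down to: a non-root pod writes a file with mode 0644 into an emptyDir backed by the node's default medium, and the test reads the mode and contents back from the container output. A minimal Go sketch of that pod follows; the UID, paths and check command are illustrative.

```go
// Sketch: non-root pod writing a 0644 file into a default-medium emptyDir.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var nonRootUID int64 = 1000 // illustrative non-root UID

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// An empty EmptyDirVolumeSource selects the default (node filesystem) medium.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "echo hello > /cache/data && chmod 0644 /cache/data && ls -l /cache/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/cache",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```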
• [SLOW TEST:7.239 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":160,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:39:13.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Jul 20 13:39:14.150: INFO: Waiting up to 5m0s for pod "downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3" in namespace "downward-api-4964" to be "Succeeded or Failed" Jul 20 13:39:14.363: INFO: Pod "downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3": Phase="Pending", Reason="", readiness=false. Elapsed: 212.88511ms Jul 20 13:39:16.366: INFO: Pod "downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216561793s Jul 20 13:39:18.675: INFO: Pod "downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.525281981s Jul 20 13:39:20.698: INFO: Pod "downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.548214986s STEP: Saw pod success Jul 20 13:39:20.698: INFO: Pod "downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3" satisfied condition "Succeeded or Failed" Jul 20 13:39:20.701: INFO: Trying to get logs from node kali-worker2 pod downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3 container dapi-container: STEP: delete the pod Jul 20 13:39:21.260: INFO: Waiting for pod downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3 to disappear Jul 20 13:39:21.498: INFO: Pod downward-api-ab22b890-6503-48be-ba7e-4788bb1213a3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:39:21.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4964" for this suite. 
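The downward API test above exposes pod name, namespace and IP to the container as environment variables via fieldRef selectors. A minimal Go sketch follows; the env var names and the container command are illustrative, while the fieldPath strings (metadata.name, metadata.namespace, status.podIP) are the standard downward API selectors.

```go
// Sketch: downward API fieldRef env vars for pod name, namespace and IP.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fieldEnv := func(name, fieldPath string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
			},
		}
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "env | grep ^POD_"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```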
• [SLOW TEST:8.711 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":165,"failed":0} [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:39:21.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 20 13:39:23.186: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created Jul 20 13:39:25.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:39:27.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:39:29.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849163, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 13:39:32.267: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:39:32.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:39:33.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8332" for this suite. 
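The conversion webhook deployed and paired with a service above is wired to the custom resource through the CRD's spec.conversion stanza, which points at that service. Below is a minimal sketch of such a CRD using the apiextensions v1 Go types; the group, kind, service name and namespace, path, port and CA bundle are all illustrative placeholders rather than the values the e2e framework generates.

```go
// Sketch: a two-version CRD using a Webhook conversion strategy (illustrative names).
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// Minimal structural schema shared by both versions.
func openAPISchema() *apiextensionsv1.CustomResourceValidation {
	return &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type:                   "object",
			XPreserveUnknownFields: boolPtr(true),
		},
	}
}

func main() {
	path := "/crdconvert"
	port := int32(9443)

	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: openAPISchema()},
				{Name: "v2", Served: true, Storage: false, Schema: openAPISchema()},
			},
			Conversion: &apiextensionsv1.CustomResourceConversion{
				Strategy: apiextensionsv1.WebhookConverter,
				Webhook: &apiextensionsv1.WebhookConversion{
					ClientConfig: &apiextensionsv1.WebhookClientConfig{
						Service: &apiextensionsv1.ServiceReference{
							Namespace: "crd-webhook",
							Name:      "e2e-test-crd-conversion-webhook",
							Path:      &path,
							Port:      &port,
						},
						CABundle: []byte("<ca-bundle>"), // placeholder, not a real bundle
					},
					ConversionReviewVersions: []string{"v1", "v1beta1"},
				},
			},
		},
	}
	fmt.Println("built CRD:", crd.Name)
}
```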
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.575 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":16,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:39:35.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-6d8cb7ff-2414-4e61-8c95-cda7985d4308 STEP: Creating a pod to test consume secrets Jul 20 13:39:37.059: INFO: Waiting up to 5m0s for pod "pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289" in namespace "secrets-3098" to be "Succeeded or Failed" Jul 20 13:39:37.645: INFO: Pod "pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289": Phase="Pending", Reason="", readiness=false. Elapsed: 585.422549ms Jul 20 13:39:39.770: INFO: Pod "pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289": Phase="Pending", Reason="", readiness=false. Elapsed: 2.710708451s Jul 20 13:39:41.903: INFO: Pod "pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289": Phase="Pending", Reason="", readiness=false. Elapsed: 4.843453315s Jul 20 13:39:43.907: INFO: Pod "pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847711273s Jul 20 13:39:45.956: INFO: Pod "pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.896061052s STEP: Saw pod success Jul 20 13:39:45.956: INFO: Pod "pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289" satisfied condition "Succeeded or Failed" Jul 20 13:39:45.958: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289 container secret-volume-test: STEP: delete the pod Jul 20 13:39:46.022: INFO: Waiting for pod pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289 to disappear Jul 20 13:39:46.047: INFO: Pod pod-secrets-067318ae-859c-4db9-8c4b-edbb42047289 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:39:46.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3098" for this suite. 
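The basic secret-volume case above pairs a Secret created with binary data and a pod that mounts it, where each Secret key becomes a file under the mount path. A minimal Go sketch follows; the names, key, value and paths are illustrative.

```go
// Sketch: a Secret plus a pod that consumes it as a volume (illustrative names).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "secrets-demo"},
		// client-go takes raw bytes; the API stores them base64-encoded on the wire.
		Data: map[string][]byte{"data-1": []byte("value-1")},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: "secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"}, // key name becomes the file name
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println("built:", secret.Name, pod.Name)
}
```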
• [SLOW TEST:10.569 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:39:46.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:40:04.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-603" for this suite. • [SLOW TEST:18.599 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":18,"skipped":273,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:40:04.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:40:12.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1941" for this suite. STEP: Destroying namespace "nsdeletetest-1830" for this suite. Jul 20 13:40:12.934: INFO: Namespace nsdeletetest-1830 was already deleted STEP: Destroying namespace "nsdeletetest-8549" for this suite. • [SLOW TEST:8.656 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":19,"skipped":278,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:40:13.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-788efaab-c919-4c40-be6f-c2bc843e8b5c STEP: Creating a pod to test consume secrets Jul 20 13:40:13.688: INFO: Waiting up to 5m0s for pod "pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3" in namespace "secrets-2148" to be "Succeeded or Failed" Jul 20 13:40:13.722: INFO: Pod "pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.436294ms Jul 20 13:40:16.394: INFO: Pod "pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.706243229s Jul 20 13:40:18.429: INFO: Pod "pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.741047534s Jul 20 13:40:20.459: INFO: Pod "pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3": Phase="Running", Reason="", readiness=true. Elapsed: 6.771377673s Jul 20 13:40:22.463: INFO: Pod "pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.775594349s STEP: Saw pod success Jul 20 13:40:22.463: INFO: Pod "pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3" satisfied condition "Succeeded or Failed" Jul 20 13:40:22.467: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3 container secret-volume-test: STEP: delete the pod Jul 20 13:40:22.569: INFO: Waiting for pod pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3 to disappear Jul 20 13:40:22.590: INFO: Pod pod-secrets-e6eac2f5-4e3a-4f3c-8ffe-aaf62994fcf3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:40:22.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2148" for this suite. • [SLOW TEST:9.287 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:40:22.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:40:22.946: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:40:25.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8049" for this suite. 
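"Custom resource defaulting for requests and from storage" refers to defaults declared in a v1 CRD's structural schema: the apiserver applies them on create and update requests and also when serving objects already persisted in etcd without the field set. A minimal Go sketch follows; the group, kind and field names are illustrative.

```go
// Sketch: a v1 CRD whose structural schema declares a default (illustrative names).
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "gadgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "gadgets", Singular: "gadget", Kind: "Gadget", ListKind: "GadgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"replicas": {
										Type: "integer",
										// Applied whenever the field is unset, both on
										// requests and when reading existing objects.
										Default: &apiextensionsv1.JSON{Raw: []byte(`1`)},
									},
								},
							},
						},
					},
				},
			}},
		},
	}
	fmt.Println("built CRD:", crd.Name)
}
```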
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":21,"skipped":302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:40:25.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:40:25.525: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:40:32.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4397" for this suite. • [SLOW TEST:7.388 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":22,"skipped":382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:40:32.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-14710708-1c26-4f28-98c7-ef4c2ae68bfd STEP: Creating a pod to test consume configMaps Jul 20 13:40:33.037: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1" in namespace "configmap-6910" to be "Succeeded or Failed" Jul 20 13:40:33.074: INFO: Pod "pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1": Phase="Pending", Reason="", readiness=false. Elapsed: 37.950526ms Jul 20 13:40:35.079: INFO: Pod "pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042828025s Jul 20 13:40:37.082: INFO: Pod "pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045332757s Jul 20 13:40:39.311: INFO: Pod "pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274357325s Jul 20 13:40:41.598: INFO: Pod "pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.560957853s Jul 20 13:40:43.854: INFO: Pod "pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.817911038s STEP: Saw pod success Jul 20 13:40:43.855: INFO: Pod "pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1" satisfied condition "Succeeded or Failed" Jul 20 13:40:43.857: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1 container configmap-volume-test: STEP: delete the pod Jul 20 13:40:44.134: INFO: Waiting for pod pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1 to disappear Jul 20 13:40:44.230: INFO: Pod pod-configmaps-996f5430-1196-44c1-b0c3-b51e51ffafb1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:40:44.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6910" for this suite. 
• [SLOW TEST:11.588 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":410,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:40:44.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:40:53.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6594" for this suite. 
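For reference, the terminated reason that the Kubelet test above asserts on can be read back with client-go roughly as follows; the kubeconfig path and namespace come from the log, while the pod name is a placeholder.

```go
// Sketch: fetch a pod and print the terminated state of its containers, the
// field the "should have an terminated reason" check inspects. Error handling
// is minimal; the pod name is hypothetical.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Get takes a context in client-go v0.18+, matching the suite's version.
	pod, err := cs.CoreV1().Pods("kubelet-test-6594").Get(context.TODO(), "bin-false-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if term := st.State.Terminated; term != nil {
			// A command that always fails typically terminates with
			// reason "Error" and a non-zero exit code.
			fmt.Printf("%s terminated: reason=%s exitCode=%d\n", st.Name, term.Reason, term.ExitCode)
		}
	}
}
```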
• [SLOW TEST:8.866 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":427,"failed":0} [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:40:53.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Jul 20 13:41:03.628: INFO: Successfully updated pod "annotationupdatebd9d8edc-7835-4cc8-9e21-15e3add1cc0e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:41:05.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8764" for this suite. 
• [SLOW TEST:12.876 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":427,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:41:05.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 20 13:41:06.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3252' Jul 20 13:41:06.558: INFO: stderr: "" Jul 20 13:41:06.558: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Jul 20 13:41:07.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3252' Jul 20 13:41:12.994: INFO: stderr: "" Jul 20 13:41:12.994: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:41:12.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3252" for this suite. 
• [SLOW TEST:7.298 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":26,"skipped":447,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:41:13.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 20 13:41:13.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023" in namespace "downward-api-5361" to be "Succeeded or Failed" Jul 20 13:41:13.755: INFO: Pod "downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023": Phase="Pending", Reason="", readiness=false. Elapsed: 64.052239ms Jul 20 13:41:15.780: INFO: Pod "downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089122611s Jul 20 13:41:18.166: INFO: Pod "downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474793759s Jul 20 13:41:20.302: INFO: Pod "downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023": Phase="Running", Reason="", readiness=true. Elapsed: 6.610632903s Jul 20 13:41:22.306: INFO: Pod "downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.614416873s STEP: Saw pod success Jul 20 13:41:22.306: INFO: Pod "downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023" satisfied condition "Succeeded or Failed" Jul 20 13:41:22.308: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023 container client-container: STEP: delete the pod Jul 20 13:41:22.390: INFO: Waiting for pod downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023 to disappear Jul 20 13:41:22.409: INFO: Pod downwardapi-volume-97f1e058-448b-4761-8f0a-ba62c3d57023 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:41:22.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5361" for this suite. 
• [SLOW TEST:9.136 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:41:22.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 20 13:41:22.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff" in namespace "projected-9413" to be "Succeeded or Failed" Jul 20 13:41:22.883: INFO: Pod "downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff": Phase="Pending", Reason="", readiness=false. Elapsed: 43.219922ms Jul 20 13:41:25.006: INFO: Pod "downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166684248s Jul 20 13:41:27.010: INFO: Pod "downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170575614s Jul 20 13:41:29.076: INFO: Pod "downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff": Phase="Running", Reason="", readiness=true. Elapsed: 6.23668696s Jul 20 13:41:31.080: INFO: Pod "downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.239800056s STEP: Saw pod success Jul 20 13:41:31.080: INFO: Pod "downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff" satisfied condition "Succeeded or Failed" Jul 20 13:41:31.081: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff container client-container: STEP: delete the pod Jul 20 13:41:31.216: INFO: Waiting for pod downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff to disappear Jul 20 13:41:31.269: INFO: Pod downwardapi-volume-ec42bbfe-0db6-4508-b5a6-1a110e7dc9ff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:41:31.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9413" for this suite. 
• [SLOW TEST:8.860 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:41:31.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:41:31.475: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:41:38.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5935" for this suite. 
• [SLOW TEST:6.859 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":543,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:41:38.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Jul 20 13:41:38.791: INFO: PodSpec: initContainers in spec.initContainers Jul 20 13:42:33.622: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6ab4c1cc-3638-40b3-9793-588ea2d92906", GenerateName:"", Namespace:"init-container-438", SelfLink:"/api/v1/namespaces/init-container-438/pods/pod-init-6ab4c1cc-3638-40b3-9793-588ea2d92906", UID:"10334371-ce7e-4230-be25-22323b3abc14", ResourceVersion:"2722659", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730849298, loc:(*time.Location)(0x7b220e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"791486898"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00236e240), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00236e2c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00236e380), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00236e420)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nmpjm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001004080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nmpjm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nmpjm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nmpjm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002dac2a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0006f4070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002dac400)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002dac420)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002dac428), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002dac42c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849299, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849299, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849299, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849298, loc:(*time.Location)(0x7b220e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.15", PodIP:"10.244.1.83", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.83"}}, StartTime:(*v1.Time)(0xc00236e6c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00236e7e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0006f41c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", 
ContainerID:"containerd://57885d9ee6ae7515dcb7604c5c167423acf38a2ec84be5c297f08ec6eefa67ce", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00236e900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00236e720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002dac51f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:42:33.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-438" for this suite. • [SLOW TEST:55.583 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":30,"skipped":551,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:42:33.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 13:42:34.709: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 13:42:36.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849354, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849354, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849355, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849354, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:42:38.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849354, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849354, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849355, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849354, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 13:42:41.808: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:42:42.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7272" for this suite. STEP: Destroying namespace "webhook-7272-markers" for this suite. 
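The patch/update steps above toggle the CREATE operation in the mutating webhook's rule, which is why the first ConfigMap is created unmodified and the second is mutated. A sketch of the rule structure involved (the configmaps resource scope is an assumption about this test's target) is shown below.

```go
// Sketch: the RuleWithOperations that the test narrows and then widens again.
// Dropping Create from Operations means newly created ConfigMaps bypass the
// webhook; patching Create back in restores mutation.
package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	withCreate := admissionregistrationv1.RuleWithOperations{
		Operations: []admissionregistrationv1.OperationType{
			admissionregistrationv1.Create, // removed and later re-added by the test
			admissionregistrationv1.Update,
		},
		Rule: admissionregistrationv1.Rule{
			APIGroups:   []string{""},
			APIVersions: []string{"v1"},
			Resources:   []string{"configmaps"},
		},
	}
	b, _ := json.MarshalIndent(withCreate, "", "  ")
	fmt.Println(string(b))
}
```

In practice the same toggle can be applied by patching the rules field of the mutatingwebhookconfigurations resource, which is what the "Updating" and "Patching" steps in the log correspond to.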
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.042 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":31,"skipped":569,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:42:43.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jul 20 13:42:54.776: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6199 PodName:pod-sharedvolume-bdcc3c92-898b-4d2a-a1f1-0f1efe1225cb ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 13:42:54.776: INFO: >>> kubeConfig: /root/.kube/config I0720 13:42:54.806453 7 log.go:172] (0xc002bc9ad0) (0xc001540960) Create stream I0720 13:42:54.806477 7 log.go:172] (0xc002bc9ad0) (0xc001540960) Stream added, broadcasting: 1 I0720 13:42:54.808417 7 log.go:172] (0xc002bc9ad0) Reply frame received for 1 I0720 13:42:54.808442 7 log.go:172] (0xc002bc9ad0) (0xc001540a00) Create stream I0720 13:42:54.808448 7 log.go:172] (0xc002bc9ad0) (0xc001540a00) Stream added, broadcasting: 3 I0720 13:42:54.809416 7 log.go:172] (0xc002bc9ad0) Reply frame received for 3 I0720 13:42:54.809443 7 log.go:172] (0xc002bc9ad0) (0xc001461220) Create stream I0720 13:42:54.809455 7 log.go:172] (0xc002bc9ad0) (0xc001461220) Stream added, broadcasting: 5 I0720 13:42:54.810106 7 log.go:172] (0xc002bc9ad0) Reply frame received for 5 I0720 13:42:54.882635 7 log.go:172] (0xc002bc9ad0) Data frame received for 3 I0720 13:42:54.882654 7 log.go:172] (0xc001540a00) (3) Data frame handling I0720 13:42:54.882673 7 log.go:172] (0xc002bc9ad0) Data frame received for 5 I0720 13:42:54.882700 7 log.go:172] (0xc001461220) (5) Data frame handling I0720 13:42:54.882765 7 log.go:172] (0xc001540a00) (3) Data frame sent I0720 13:42:54.882794 7 log.go:172] (0xc002bc9ad0) Data frame received for 3 I0720 13:42:54.882810 7 log.go:172] (0xc001540a00) (3) Data frame handling I0720 13:42:54.883955 7 log.go:172] (0xc002bc9ad0) Data frame received for 1 I0720 13:42:54.883974 7 log.go:172] (0xc001540960) (1) Data frame handling I0720 13:42:54.883983 7 log.go:172] (0xc001540960) (1) Data frame sent I0720 13:42:54.883994 
7 log.go:172] (0xc002bc9ad0) (0xc001540960) Stream removed, broadcasting: 1 I0720 13:42:54.884011 7 log.go:172] (0xc002bc9ad0) Go away received I0720 13:42:54.884104 7 log.go:172] (0xc002bc9ad0) (0xc001540960) Stream removed, broadcasting: 1 I0720 13:42:54.884120 7 log.go:172] (0xc002bc9ad0) (0xc001540a00) Stream removed, broadcasting: 3 I0720 13:42:54.884132 7 log.go:172] (0xc002bc9ad0) (0xc001461220) Stream removed, broadcasting: 5 Jul 20 13:42:54.884: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:42:54.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6199" for this suite. • [SLOW TEST:11.128 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":32,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:42:54.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-31b3cbb6-aeaf-45ca-9dce-7abdefd94884 STEP: Creating a pod to test consume configMaps Jul 20 13:42:55.278: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34" in namespace "projected-2613" to be "Succeeded or Failed" Jul 20 13:42:55.472: INFO: Pod "pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34": Phase="Pending", Reason="", readiness=false. Elapsed: 194.682436ms Jul 20 13:42:57.784: INFO: Pod "pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.505975287s Jul 20 13:42:59.963: INFO: Pod "pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.685659688s Jul 20 13:43:02.586: INFO: Pod "pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34": Phase="Pending", Reason="", readiness=false. Elapsed: 7.307858622s Jul 20 13:43:04.719: INFO: Pod "pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.440932678s STEP: Saw pod success Jul 20 13:43:04.719: INFO: Pod "pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34" satisfied condition "Succeeded or Failed" Jul 20 13:43:04.721: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34 container projected-configmap-volume-test: STEP: delete the pod Jul 20 13:43:04.845: INFO: Waiting for pod pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34 to disappear Jul 20 13:43:04.880: INFO: Pod pod-projected-configmaps-2526ded7-0fe5-466c-90cb-4ae435353f34 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:43:04.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2613" for this suite. • [SLOW TEST:9.997 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:43:04.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 20 13:43:05.324: INFO: Waiting up to 5m0s for pod "pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c" in namespace "emptydir-2563" to be "Succeeded or Failed" Jul 20 13:43:05.347: INFO: Pod "pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.870696ms Jul 20 13:43:07.351: INFO: Pod "pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027239726s Jul 20 13:43:09.467: INFO: Pod "pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143208471s Jul 20 13:43:11.670: INFO: Pod "pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.346727444s Jul 20 13:43:13.674: INFO: Pod "pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.350870717s STEP: Saw pod success Jul 20 13:43:13.674: INFO: Pod "pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c" satisfied condition "Succeeded or Failed" Jul 20 13:43:13.677: INFO: Trying to get logs from node kali-worker2 pod pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c container test-container: STEP: delete the pod Jul 20 13:43:13.762: INFO: Waiting for pod pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c to disappear Jul 20 13:43:13.768: INFO: Pod pod-79ae5e37-b525-4c00-ab45-3dfe9d0f432c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:43:13.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2563" for this suite. • [SLOW TEST:8.890 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:43:13.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-924100e0-f078-4d64-b862-4bdfb8b0d95f STEP: Creating a pod to test consume configMaps Jul 20 13:43:13.966: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9" in namespace "projected-9146" to be "Succeeded or Failed" Jul 20 13:43:13.985: INFO: Pod "pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.175435ms Jul 20 13:43:15.989: INFO: Pod "pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022435304s Jul 20 13:43:17.999: INFO: Pod "pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032853874s Jul 20 13:43:20.245: INFO: Pod "pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.278758822s STEP: Saw pod success Jul 20 13:43:20.245: INFO: Pod "pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9" satisfied condition "Succeeded or Failed" Jul 20 13:43:20.248: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9 container projected-configmap-volume-test: STEP: delete the pod Jul 20 13:43:21.180: INFO: Waiting for pod pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9 to disappear Jul 20 13:43:21.234: INFO: Pod pod-projected-configmaps-65be49e6-3fb1-4c86-97bf-fd9a74f8bca9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:43:21.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9146" for this suite. • [SLOW TEST:7.666 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":686,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:43:21.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:43:21.904: INFO: Creating deployment "webserver-deployment" Jul 20 13:43:22.060: INFO: Waiting for observed generation 1 Jul 20 13:43:24.138: INFO: Waiting for all required pods to come up Jul 20 13:43:24.142: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jul 20 13:43:38.342: INFO: Waiting for deployment "webserver-deployment" to complete Jul 20 13:43:38.347: INFO: Updating deployment "webserver-deployment" with a non-existent image Jul 20 13:43:38.353: INFO: Updating deployment webserver-deployment Jul 20 13:43:38.353: INFO: Waiting for observed generation 2 Jul 20 13:43:42.037: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jul 20 13:43:42.618: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jul 20 13:43:42.622: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jul 20 13:43:43.388: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jul 20 13:43:43.388: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jul 20 13:43:43.833: 
INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jul 20 13:43:43.873: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jul 20 13:43:43.873: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jul 20 13:43:44.182: INFO: Updating deployment webserver-deployment Jul 20 13:43:44.182: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jul 20 13:43:44.268: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jul 20 13:43:44.343: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jul 20 13:43:44.655: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7696 /apis/apps/v1/namespaces/deployment-7696/deployments/webserver-deployment 38e06f35-5b5d-4aa5-86ce-4819697bf009 2723279 3 2020-07-20 13:43:21 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 
105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cb0538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is 
progressing.,LastUpdateTime:2020-07-20 13:43:40 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-20 13:43:44 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jul 20 13:43:44.777: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-7696 /apis/apps/v1/namespaces/deployment-7696/replicasets/webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 2723319 3 2020-07-20 13:43:38 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 38e06f35-5b5d-4aa5-86ce-4819697bf009 0xc002cb09c7 0xc002cb09c8}] [] [{kube-controller-manager Update apps/v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 56 101 48 54 102 51 53 45 53 98 53 100 45 52 97 97 53 45 56 54 99 101 45 52 56 49 57 54 57 55 98 102 48 48 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 
58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cb0a48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 13:43:44.777: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jul 20 13:43:44.777: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-7696 /apis/apps/v1/namespaces/deployment-7696/replicasets/webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 2723305 3 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 38e06f35-5b5d-4aa5-86ce-4819697bf009 0xc002cb0aa7 0xc002cb0aa8}] [] [{kube-controller-manager Update apps/v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 
123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 56 101 48 54 102 51 53 45 53 98 53 100 45 52 97 97 53 45 56 54 99 101 45 52 56 49 57 54 57 55 98 102 48 48 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 
84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cb0b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jul 20 13:43:44.870: INFO: Pod "webserver-deployment-6676bcd6d4-2zlrg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2zlrg webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-2zlrg 042510da-39ed-4882-8e96-75a372bdee57 2723215 0 2020-07-20 13:43:39 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb1057 0xc002cb1058}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 
80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-20 13:43:39 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.871: INFO: Pod "webserver-deployment-6676bcd6d4-4gx5x" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4gx5x webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-4gx5x 9d83725f-e6d6-4d52-92c7-65b6f1f3205c 2723317 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb1207 0xc002cb1208}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.871: INFO: Pod "webserver-deployment-6676bcd6d4-6stlm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6stlm webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-6stlm 1019c94e-6904-4c59-8d5c-90fb56179642 2723231 0 2020-07-20 13:43:39 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet 
webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb1347 0xc002cb1348}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-20 13:43:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.871: INFO: Pod "webserver-deployment-6676bcd6d4-7zqdd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7zqdd webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-7zqdd 955ec73f-6c66-49c7-bf68-1a98c29b424e 2723328 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb14f7 0xc002cb14f8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-20 13:43:44 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.872: INFO: Pod "webserver-deployment-6676bcd6d4-8fxgn" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8fxgn webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-8fxgn 628b26f2-50a4-4a5a-801e-8500159cd9d1 2723310 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb16a7 0xc002cb16a8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.872: INFO: Pod "webserver-deployment-6676bcd6d4-8wc9h" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8wc9h webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-8wc9h c66659e9-3941-48c9-8024-4c705578ef92 2723307 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet 
webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb17e7 0xc002cb17e8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.873: INFO: Pod "webserver-deployment-6676bcd6d4-9df8v" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9df8v webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-9df8v 4752f5ed-d624-44b6-a462-45ef3eee293e 2723241 0 2020-07-20 13:43:39 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet 
webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb1927 0xc002cb1928}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 13:43:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-20 13:43:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.873: INFO: Pod "webserver-deployment-6676bcd6d4-hv6cm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hv6cm webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-hv6cm 36a42339-67d9-4d02-af81-2dab35accefc 2723311 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb1ad7 0xc002cb1ad8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.873: INFO: Pod "webserver-deployment-6676bcd6d4-n444c" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-n444c webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-n444c 14f9e8e4-2a88-4733-a8ca-c833a67836a4 2723302 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb1c17 0xc002cb1c18}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.873: INFO: Pod "webserver-deployment-6676bcd6d4-q8b54" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-q8b54 webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-q8b54 85cfb244-c5d0-4df7-a9f3-7200245c5ece 2723291 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet 
webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb1d57 0xc002cb1d58}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.874: INFO: Pod "webserver-deployment-6676bcd6d4-sb9sl" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sb9sl webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-sb9sl 59545c99-5fcf-4d50-a135-1962f7caa01f 2723207 0 2020-07-20 13:43:39 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet 
webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002cb1e97 0xc002cb1e98}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-20 13:43:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.874: INFO: Pod "webserver-deployment-6676bcd6d4-vvmv2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vvmv2 webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-vvmv2 88ca91f3-d7d0-42f3-b324-990c1bd8b142 2723290 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002df4047 0xc002df4048}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.874: INFO: Pod "webserver-deployment-6676bcd6d4-zgfjj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zgfjj webserver-deployment-6676bcd6d4- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-6676bcd6d4-zgfjj e0be4b63-e966-4bda-9878-948b9c00da37 2723235 0 2020-07-20 13:43:39 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca6b1a1e-1997-4429-a252-de68304daa75 0xc002df4187 0xc002df4188}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 54 98 49 97 49 101 45 49 57 57 55 45 52 52 50 57 45 97 50 53 50 45 100 101 54 56 51 48 52 100 97 97 55 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:no
de.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-20 13:43:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.874: INFO: Pod "webserver-deployment-84855cf797-2qr9b" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2qr9b webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-2qr9b 5d05b564-f96f-40c4-a093-a339029dfbc7 2723149 0 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df4337 0xc002df4338}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 
116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 57 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.90,StartTime:2020-07-20 13:43:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:43:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7f21128baf82ae6fc507aaee06f11d31295a28ab9126f79e37b762794f0f4482,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.875: INFO: Pod "webserver-deployment-84855cf797-47xkb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-47xkb webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-47xkb e41bba2e-64af-49c5-9ced-b10ebdc49285 2723289 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df44e7 0xc002df44e8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.875: INFO: Pod "webserver-deployment-84855cf797-4wqh6" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4wqh6 webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-4wqh6 721cd16a-7a95-4273-abbe-6131a3333faa 2723281 0 2020-07-20 13:43:44 
+0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df4617 0xc002df4618}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.875: INFO: Pod "webserver-deployment-84855cf797-7rtvr" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7rtvr webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-7rtvr 5d22de25-82aa-4fbd-8050-5586df54d7a2 2723166 0 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] 
[{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df4747 0xc002df4748}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 
82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 51 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralC
ontainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.238,StartTime:2020-07-20 13:43:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:43:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ccc53a7d8da21a588057e521f50bb2b21245614005c99bcf6a6db4b0fc5035a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.875: INFO: Pod "webserver-deployment-84855cf797-8gpcc" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8gpcc webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-8gpcc 39d46e66-e905-4310-b5ee-920ecef9c261 2723299 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df48f7 0xc002df48f8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 
115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},St
atus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.876: INFO: Pod "webserver-deployment-84855cf797-fcnn2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fcnn2 webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-fcnn2 900583b3-ff50-4920-87d1-27b961852b0c 2723313 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df4a27 0xc002df4a28}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.876: INFO: Pod "webserver-deployment-84855cf797-h4hp4" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-h4hp4 webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-h4hp4 042d482e-a5de-428b-b286-80dbe5fcb1bd 2723121 0 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] 
[{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df4b57 0xc002df4b58}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 
82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 51 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralC
ontainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.235,StartTime:2020-07-20 13:43:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:43:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cf8b5f800e5f799c37f90b15f55c5433bf06f0d0184fa6d07da3395777f84cf0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.235,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.876: INFO: Pod "webserver-deployment-84855cf797-h7mcg" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-h7mcg webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-h7mcg ff54c0f1-84b0-4c82-9e82-d8e3a07fc6c4 2723144 0 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df4d07 0xc002df4d08}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 57 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.92,StartTime:2020-07-20 13:43:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:43:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3ea17b98f66b5bf677f3f670250ca3b850fa9e27153c9f4d92f6ccc15d54fef9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.876: INFO: Pod "webserver-deployment-84855cf797-jr4gj" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jr4gj webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-jr4gj 40e40ddd-bf92-49e2-b598-329fb50f8db5 2723140 0 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df4eb7 0xc002df4eb8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 51 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.236,StartTime:2020-07-20 13:43:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:43:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://72377db3c9b313b1e9c1880fa0bac246321400efcf1cafc8b06719282e701cc8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.877: INFO: Pod "webserver-deployment-84855cf797-m47gn" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m47gn webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-m47gn c82ee8b9-da33-4dc9-83c4-eaf56d91701a 2723105 0 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5067 0xc002df5068}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 56 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.89,StartTime:2020-07-20 13:43:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:43:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0062742c6424d7634fd9c8e5a326a138de4360aa26d9a02d582fac8cea3ac4ef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.877: INFO: Pod "webserver-deployment-84855cf797-nfd5j" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nfd5j webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-nfd5j dd798bc5-7a74-40bd-98eb-f4e1f9771af9 2723314 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5217 0xc002df5218}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.877: INFO: Pod "webserver-deployment-84855cf797-p2fmc" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-p2fmc webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-p2fmc 75390e09-78e0-4416-b954-6e7492aecdc6 2723301 0 2020-07-20 
13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5347 0xc002df5348}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 
109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-20 13:43:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.877: INFO: Pod "webserver-deployment-84855cf797-phj6h" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-phj6h webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-phj6h c2dd0ec5-34ec-4ec5-a984-32385f9d910e 2723094 0 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df54d7 0xc002df54d8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 
103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:29 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 56 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.88,StartTime:2020-07-20 13:43:22 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:43:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6b6d086fdb2498eb0a1964b09e6c5d1e323ab2d06cf077fd5adc6a5c9e3374e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.878: INFO: Pod "webserver-deployment-84855cf797-qj4zg" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qj4zg webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-qj4zg 713f49f8-9464-4f1c-a9fc-6dad0ee66af5 2723160 0 2020-07-20 13:43:22 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5687 0xc002df5688}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 57 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.91,StartTime:2020-07-20 13:43:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 13:43:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bc330c51a886bafa572386efa34a4efa84b35df33ff85208893c8878826ee49b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.878: INFO: Pod "webserver-deployment-84855cf797-qjj2w" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qjj2w webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-qjj2w 4e92294f-0c86-4795-91da-56562e36d39b 2723312 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5837 0xc002df5838}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 
114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.878: INFO: Pod "webserver-deployment-84855cf797-rcsqr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rcsqr webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-rcsqr 6382a91f-e86c-44b7-ac66-c326ccd72e81 2723306 0 2020-07-20 
13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5967 0xc002df5968}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.879: INFO: Pod "webserver-deployment-84855cf797-tpw9k" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tpw9k webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-tpw9k e05f9351-033e-42c5-ab9a-f748d9d80896 2723293 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] 
map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5a97 0xc002df5a98}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.879: INFO: Pod "webserver-deployment-84855cf797-w9cb7" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-w9cb7 webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-w9cb7 a87844e7-2a3d-4233-be32-2dd4f4ce33ab 2723284 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] 
map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5bc7 0xc002df5bc8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.879: INFO: Pod "webserver-deployment-84855cf797-wz5np" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wz5np webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-wz5np e6b9f998-b0ab-4a73-93bc-23e9535e3bb4 2723308 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] 
map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5cf7 0xc002df5cf8}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 13:43:44.880: INFO: Pod "webserver-deployment-84855cf797-zpvdb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zpvdb webserver-deployment-84855cf797- deployment-7696 /api/v1/namespaces/deployment-7696/pods/webserver-deployment-84855cf797-zpvdb d97664e3-4d9f-4d93-9e21-d93efbb2fdb5 2723327 0 2020-07-20 13:43:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] 
map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d578a112-95d9-46ed-89eb-8ab880bb91e3 0xc002df5e27 0xc002df5e28}] [] [{kube-controller-manager Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 55 56 97 49 49 50 45 57 53 100 57 45 52 54 101 100 45 56 57 101 98 45 56 97 98 56 56 48 98 98 57 49 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:43:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmgsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmgsj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:T
rue,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:43:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-20 13:43:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:43:44.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7696" for this suite. • [SLOW TEST:23.796 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":36,"skipped":698,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:43:45.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-1c467334-2633-4866-8dc6-2dc4b2dd1a96 STEP: Creating a pod to test consume configMaps Jul 20 13:43:46.808: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5" in namespace "configmap-5100" to be "Succeeded or Failed" Jul 20 13:43:47.036: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 227.433766ms Jul 20 13:43:49.683: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.874763331s Jul 20 13:43:51.802: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.99312751s Jul 20 13:43:53.875: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.066472847s Jul 20 13:43:56.329: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.520660347s Jul 20 13:43:58.875: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.065993721s Jul 20 13:44:01.196: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.3871549s Jul 20 13:44:03.745: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.936460991s Jul 20 13:44:05.766: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.957418011s Jul 20 13:44:07.920: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.111630691s Jul 20 13:44:10.120: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.311596006s STEP: Saw pod success Jul 20 13:44:10.120: INFO: Pod "pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5" satisfied condition "Succeeded or Failed" Jul 20 13:44:10.208: INFO: Trying to get logs from node kali-worker pod pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5 container configmap-volume-test: STEP: delete the pod Jul 20 13:44:10.847: INFO: Waiting for pod pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5 to disappear Jul 20 13:44:10.994: INFO: Pod pod-configmaps-cdfa38e5-b103-40e7-8077-002447e2dbd5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:44:10.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5100" for this suite. 
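For readers following along, the pod this ConfigMap test creates is essentially one container that mounts the ConfigMap as a volume and prints a projected key back out, and the assertion is made against the container log. A minimal sketch of that shape using the Go client types; the busybox image, mount path and key name here are stand-ins for the exact agnhost/mounttest arguments the framework uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0644) // file mode applied to the projected ConfigMap keys

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example", Namespace: "configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-example"},
						DefaultMode:          &defaultMode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "busybox:1.29",
				// Read one projected key back; the test then asserts on this output in the container log.
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

The polling above ("Waiting up to 5m0s ... to be Succeeded or Failed", then "Trying to get logs") is exactly the wait-then-read-log pattern this pod shape is built for.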
• [SLOW TEST:26.018 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":703,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:44:11.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Jul 20 13:44:11.579: INFO: Waiting up to 5m0s for pod "downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0" in namespace "downward-api-811" to be "Succeeded or Failed" Jul 20 13:44:11.677: INFO: Pod "downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 98.300278ms Jul 20 13:44:14.193: INFO: Pod "downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.614046018s Jul 20 13:44:16.462: INFO: Pod "downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.883525689s Jul 20 13:44:18.881: INFO: Pod "downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.301833401s STEP: Saw pod success Jul 20 13:44:18.881: INFO: Pod "downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0" satisfied condition "Succeeded or Failed" Jul 20 13:44:19.626: INFO: Trying to get logs from node kali-worker pod downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0 container dapi-container: STEP: delete the pod Jul 20 13:44:21.462: INFO: Waiting for pod downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0 to disappear Jul 20 13:44:21.492: INFO: Pod downward-api-ea8e35be-959e-41f2-8c5b-dcad1073fbb0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:44:21.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-811" for this suite. 
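The downward-api pod in this test only needs an environment variable wired to status.hostIP; the framework then checks the container log for a HOST_IP=... line. A minimal sketch under that assumption, with busybox printing its environment as a stand-in for the dapi-container the framework actually builds:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example", Namespace: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"}, // prints HOST_IP along with the rest of the environment
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							APIVersion: "v1",
							FieldPath:  "status.hostIP", // the node IP the kubelet reports for this pod
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}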
• [SLOW TEST:10.302 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":722,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:44:21.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:45:24.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9280" for this suite. 
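The three containers stepped through above, terminate-cmd-rpa, terminate-cmd-rpof and terminate-cmd-rpn, differ mainly in restart policy (Always, OnFailure, Never) and exit code, and the assertions are on RestartCount, Phase, the Ready condition and the terminated State. A sketch of one such case, assuming a hypothetical busybox command that exits cleanly under restartPolicy Never, where the expected end state would be Phase=Succeeded with RestartCount 0:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-example", Namespace: "container-runtime-example"},
		Spec: corev1.PodSpec{
			// Never: the container runs once; a zero exit code should leave the pod in Phase=Succeeded,
			// Ready=false, a Terminated container State, and RestartCount=0.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "terminate-cmd",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 1; exit 0"},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

Swapping the policy to OnFailure or Always and the exit code to non-zero is what drives the different RestartCount, Phase and State expectations listed in the STEPs above.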
• [SLOW TEST:63.293 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:45:24.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 20 13:45:25.106: INFO: Waiting up to 5m0s for pod "pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2" in namespace "emptydir-9796" to be "Succeeded or Failed" Jul 20 13:45:25.142: INFO: Pod "pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.749882ms Jul 20 13:45:27.254: INFO: Pod "pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147903208s Jul 20 13:45:29.289: INFO: Pod "pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183438779s Jul 20 13:45:31.593: INFO: Pod "pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.487452828s STEP: Saw pod success Jul 20 13:45:31.593: INFO: Pod "pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2" satisfied condition "Succeeded or Failed" Jul 20 13:45:31.596: INFO: Trying to get logs from node kali-worker2 pod pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2 container test-container: STEP: delete the pod Jul 20 13:45:31.848: INFO: Waiting for pod pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2 to disappear Jul 20 13:45:31.971: INFO: Pod pod-4f5a3072-ce84-411a-9821-a6b4462fe0e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:45:31.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9796" for this suite. 
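The emptydir case above combines three things: a memory-backed (tmpfs) emptyDir, a non-root security context, and a file created with mode 0777 on that mount. A sketch of a pod with that shape, using a hypothetical busybox command in place of the framework's mounttest arguments (the UID 1001 is likewise only illustrative, any non-root UID would do):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001) // illustrative non-root UID

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example", Namespace: "emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Create a file with mode 0777 and print its permissions so the log can be asserted on.
				Command: []string{"sh", "-c",
					"touch /test-volume/test-file && chmod 0777 /test-volume/test-file && ls -l /test-volume/test-file"},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}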
• [SLOW TEST:7.122 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":758,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:45:31.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:45:32.401: INFO: Creating deployment "test-recreate-deployment" Jul 20 13:45:32.432: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jul 20 13:45:32.532: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jul 20 13:45:34.766: INFO: Waiting deployment "test-recreate-deployment" to complete Jul 20 13:45:34.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:45:36.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:45:38.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849532, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:45:40.773: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 20 13:45:40.785: INFO: Updating deployment test-recreate-deployment Jul 20 13:45:40.785: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jul 20 13:45:42.876: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3423 /apis/apps/v1/namespaces/deployment-3423/deployments/test-recreate-deployment 9706214a-2ed3-4203-ae19-cd21276b770e 2724143 2 2020-07-20 13:45:32 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-20 13:45:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 
34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-20 13:45:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027ec978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-20 13:45:42 +0000 UTC,LastTransitionTime:2020-07-20 13:45:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-07-20 13:45:42 +0000 UTC,LastTransitionTime:2020-07-20 13:45:32 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jul 20 13:45:43.122: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-3423 /apis/apps/v1/namespaces/deployment-3423/replicasets/test-recreate-deployment-d5667d9c7 6ffd510c-8aac-406a-99f6-db3cc00571c8 2724140 1 2020-07-20 13:45:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9706214a-2ed3-4203-ae19-cd21276b770e 0xc0027ed090 0xc0027ed091}] [] [{kube-controller-manager Update apps/v1 2020-07-20 13:45:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 55 48 54 50 49 52 97 45 50 101 100 51 45 52 50 48 51 45 97 101 49 57 45 99 100 50 49 50 55 54 98 55 55 48 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 
58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027ed108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 13:45:43.122: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 20 13:45:43.122: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c deployment-3423 /apis/apps/v1/namespaces/deployment-3423/replicasets/test-recreate-deployment-74d98b5f7c 385ca918-4ebe-4009-8d2a-42bf0ecc9f1d 2724129 2 2020-07-20 13:45:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9706214a-2ed3-4203-ae19-cd21276b770e 0xc0027ecf97 0xc0027ecf98}] [] [{kube-controller-manager Update apps/v1 2020-07-20 13:45:41 
+0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 55 48 54 50 49 52 97 45 50 101 100 51 45 52 50 48 51 45 97 101 49 57 45 99 100 50 49 50 55 54 98 55 55 48 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 
110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027ed028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 13:45:43.126: INFO: Pod "test-recreate-deployment-d5667d9c7-wzslm" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-wzslm test-recreate-deployment-d5667d9c7- deployment-3423 /api/v1/namespaces/deployment-3423/pods/test-recreate-deployment-d5667d9c7-wzslm 20e264f2-ec1a-4b0e-aeac-0c7e41c60710 2724144 0 2020-07-20 13:45:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 6ffd510c-8aac-406a-99f6-db3cc00571c8 0xc0027ed6f0 0xc0027ed6f1}] [] [{kube-controller-manager Update v1 2020-07-20 13:45:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 102 102 100 53 49 48 99 45 56 97 97 99 45 52 48 54 97 45 57 57 102 54 45 100 98 51 99 99 48 48 53 55 49 99 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 
77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 13:45:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8n9pb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8n9pb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8n9pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:45:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:45:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:45:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 13:45:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-20 13:45:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:45:43.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3423" for this suite. • [SLOW TEST:11.232 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":41,"skipped":777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:45:43.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 20 13:45:43.941: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 20 13:45:48.979: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:45:49.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9117" for this suite. 
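The managedFields blocks dumped for the Deployment objects above (FieldsV1{Raw:*[123 34 102 58 ...]}) are not corrupted output: the framework prints the FieldsV1 payload as a raw byte slice, so each number is the decimal ASCII code of one JSON character (123 = '{', 34 = '"', 102 = 'f', 58 = ':'). A minimal helper, not part of the e2e framework, that turns such a dump back into the JSON it encodes:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeFieldsV1 converts a space-separated decimal byte dump, as printed
// in the managedFields output above, back into the JSON string it encodes.
func decodeFieldsV1(dump string) (string, error) {
	b := make([]byte, 0, len(dump)/4)
	for _, tok := range strings.Fields(dump) {
		n, err := strconv.Atoi(tok)
		if err != nil {
			return "", fmt.Errorf("bad byte %q: %w", tok, err)
		}
		b = append(b, byte(n))
	}
	return string(b), nil
}

func main() {
	// The opening bytes of the kube-controller-manager entry above.
	s, _ := decodeFieldsV1("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123")
	fmt.Println(s) // prints: {"f:metadata":{
}

Decoded in full, that entry is the per-manager field-ownership map ({"f:metadata":{"f:generateName":{},"f:labels":{...}},...}) the API server records for server-side apply; nothing in the dumped pod spec itself is unusual.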
• [SLOW TEST:6.443 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":42,"skipped":804,"failed":0} [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:45:49.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Jul 20 13:45:50.283: INFO: Waiting up to 5m0s for pod "var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46" in namespace "var-expansion-5685" to be "Succeeded or Failed" Jul 20 13:45:50.368: INFO: Pod "var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46": Phase="Pending", Reason="", readiness=false. Elapsed: 85.295836ms Jul 20 13:45:52.480: INFO: Pod "var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197131747s Jul 20 13:45:54.484: INFO: Pod "var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201255468s Jul 20 13:45:56.900: INFO: Pod "var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46": Phase="Running", Reason="", readiness=true. Elapsed: 6.61704285s Jul 20 13:45:58.941: INFO: Pod "var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.658467395s STEP: Saw pod success Jul 20 13:45:58.941: INFO: Pod "var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46" satisfied condition "Succeeded or Failed" Jul 20 13:45:58.944: INFO: Trying to get logs from node kali-worker pod var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46 container dapi-container: STEP: delete the pod Jul 20 13:45:59.434: INFO: Waiting for pod var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46 to disappear Jul 20 13:45:59.438: INFO: Pod var-expansion-0a4b5573-76f8-42e0-81c4-9c2e063e7f46 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:45:59.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5685" for this suite. 
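The var-expansion pod above exercises the $(VAR_NAME) dependent-environment-variable syntax: a later env entry may reference an earlier one, and the kubelet expands the reference before the container starts. A minimal sketch of that kind of spec, assuming the k8s.io/api module; the container name dapi-container matches the log, but the image, variable names, and values are placeholders rather than the test's actual manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder; the e2e test uses its own test image
				Command: []string{"sh", "-c", "echo $(FOO_BAR)"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					// Composed from FOO; the kubelet substitutes $(FOO) at container start.
					{Name: "FOO_BAR", Value: "$(FOO);;bar-value"},
				},
			}},
		},
	}
	// The stored spec keeps the unexpanded reference; expansion happens on the node.
	fmt.Println(pod.Spec.Containers[0].Env[1].Value) // $(FOO);;bar-value
}

Running such a pod and reading its container log (as the test does with "Trying to get logs from node ... container dapi-container") should show the composed value, confirming that env vars can be built from previously defined ones.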
• [SLOW TEST:9.841 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":804,"failed":0} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:45:59.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2112 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2112;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2112 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2112;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2112.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2112.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2112.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2112.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2112.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2112.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2112.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2112.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2112.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2112.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2112.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 154.84.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.84.154_udp@PTR;check="$$(dig +tcp +noall +answer +search 154.84.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.84.154_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2112 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2112;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2112 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2112;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2112.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2112.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2112.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2112.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2112.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2112.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2112.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2112.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2112.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2112.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2112.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2112.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 154.84.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.84.154_udp@PTR;check="$$(dig +tcp +noall +answer +search 154.84.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.84.154_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 13:46:19.602: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:19.681: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:19.685: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.092: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.095: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.517: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.524: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.746: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.749: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.753: INFO: Unable to read jessie_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.755: INFO: Unable to read jessie_tcp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.759: INFO: Unable to read jessie_udp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.762: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.765: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.767: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:20.788: INFO: Lookups using dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2112 wheezy_tcp@dns-test-service.dns-2112 wheezy_udp@dns-test-service.dns-2112.svc wheezy_tcp@dns-test-service.dns-2112.svc wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2112 jessie_tcp@dns-test-service.dns-2112 jessie_udp@dns-test-service.dns-2112.svc jessie_tcp@dns-test-service.dns-2112.svc jessie_udp@_http._tcp.dns-test-service.dns-2112.svc jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc] Jul 20 13:46:25.793: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.796: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.799: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.803: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.806: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.809: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.812: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.815: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.833: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.836: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.839: INFO: Unable to read jessie_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.845: INFO: Unable to read jessie_udp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.847: INFO: Unable to read jessie_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.851: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.858: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:25.874: INFO: Lookups using dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2112 wheezy_tcp@dns-test-service.dns-2112 wheezy_udp@dns-test-service.dns-2112.svc wheezy_tcp@dns-test-service.dns-2112.svc wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2112 jessie_tcp@dns-test-service.dns-2112 jessie_udp@dns-test-service.dns-2112.svc jessie_tcp@dns-test-service.dns-2112.svc jessie_udp@_http._tcp.dns-test-service.dns-2112.svc jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc] Jul 20 13:46:31.020: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.024: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.028: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.031: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112 from pod 
dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.034: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.038: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.040: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.289: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.370: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.373: INFO: Unable to read jessie_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.835: INFO: Unable to read jessie_tcp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:31.839: INFO: Unable to read jessie_udp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:32.618: INFO: Unable to read jessie_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:32.942: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:32.946: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:32.964: INFO: Lookups using dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2112 wheezy_tcp@dns-test-service.dns-2112 wheezy_udp@dns-test-service.dns-2112.svc wheezy_tcp@dns-test-service.dns-2112.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2112 jessie_tcp@dns-test-service.dns-2112 jessie_udp@dns-test-service.dns-2112.svc jessie_tcp@dns-test-service.dns-2112.svc jessie_udp@_http._tcp.dns-test-service.dns-2112.svc jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc] Jul 20 13:46:35.978: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:35.981: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:36.657: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.008: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.011: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.015: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.022: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.054: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.057: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.059: INFO: Unable to read jessie_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.062: INFO: Unable to read jessie_tcp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.064: INFO: Unable to read jessie_udp@dns-test-service.dns-2112.svc from pod 
dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.067: INFO: Unable to read jessie_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.070: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:37.073: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:38.593: INFO: Lookups using dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2112 wheezy_tcp@dns-test-service.dns-2112 wheezy_udp@dns-test-service.dns-2112.svc wheezy_tcp@dns-test-service.dns-2112.svc wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2112 jessie_tcp@dns-test-service.dns-2112 jessie_udp@dns-test-service.dns-2112.svc jessie_tcp@dns-test-service.dns-2112.svc jessie_udp@_http._tcp.dns-test-service.dns-2112.svc jessie_tcp@_http._tcp.dns-test-service.dns-2112.svc] Jul 20 13:46:40.792: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:40.795: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:40.854: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:40.857: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112 from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:40.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:40.862: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:40.864: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc from pod dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:40.867: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc from pod 
dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82: the server could not find the requested resource (get pods dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82) Jul 20 13:46:43.335: INFO: Lookups using dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2112 wheezy_tcp@dns-test-service.dns-2112 wheezy_udp@dns-test-service.dns-2112.svc wheezy_tcp@dns-test-service.dns-2112.svc wheezy_udp@_http._tcp.dns-test-service.dns-2112.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2112.svc] Jul 20 13:46:47.748: INFO: DNS probes using dns-2112/dns-test-0c70fa27-529d-465d-846f-fc7ca7c4ef82 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:46:53.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2112" for this suite. • [SLOW TEST:53.936 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":44,"skipped":806,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:46:53.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 13:46:53.694: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8805 I0720 13:46:53.737846 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8805, replica count: 1 I0720 13:46:54.788224 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:46:55.788475 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:46:56.788787 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:46:57.789011 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:46:58.789242 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:46:59.789431 7 runners.go:190] svc-latency-rc Pods: 1 out 
of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:47:00.789688 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:47:01.789918 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:47:02.790150 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 13:47:03.790334 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 13:47:04.011: INFO: Created: latency-svc-grdr9 Jul 20 13:47:04.048: INFO: Got endpoints: latency-svc-grdr9 [157.491425ms] Jul 20 13:47:04.229: INFO: Created: latency-svc-sbfp7 Jul 20 13:47:04.325: INFO: Got endpoints: latency-svc-sbfp7 [277.367764ms] Jul 20 13:47:05.083: INFO: Created: latency-svc-n6dgt Jul 20 13:47:05.126: INFO: Got endpoints: latency-svc-n6dgt [1.077940294s] Jul 20 13:47:05.458: INFO: Created: latency-svc-7j7sv Jul 20 13:47:05.478: INFO: Got endpoints: latency-svc-7j7sv [1.429970913s] Jul 20 13:47:05.918: INFO: Created: latency-svc-9vt5n Jul 20 13:47:05.965: INFO: Got endpoints: latency-svc-9vt5n [1.917187922s] Jul 20 13:47:06.090: INFO: Created: latency-svc-2vknb Jul 20 13:47:06.105: INFO: Got endpoints: latency-svc-2vknb [2.056627504s] Jul 20 13:47:06.278: INFO: Created: latency-svc-8sdhg Jul 20 13:47:06.322: INFO: Got endpoints: latency-svc-8sdhg [2.273673925s] Jul 20 13:47:06.357: INFO: Created: latency-svc-gs4zv Jul 20 13:47:06.463: INFO: Got endpoints: latency-svc-gs4zv [2.415392904s] Jul 20 13:47:06.467: INFO: Created: latency-svc-46cqw Jul 20 13:47:06.516: INFO: Got endpoints: latency-svc-46cqw [2.467780095s] Jul 20 13:47:06.637: INFO: Created: latency-svc-bl4xd Jul 20 13:47:06.685: INFO: Got endpoints: latency-svc-bl4xd [2.636682269s] Jul 20 13:47:06.823: INFO: Created: latency-svc-g9jxg Jul 20 13:47:07.283: INFO: Got endpoints: latency-svc-g9jxg [3.23539031s] Jul 20 13:47:07.507: INFO: Created: latency-svc-hd667 Jul 20 13:47:08.122: INFO: Got endpoints: latency-svc-hd667 [4.073804535s] Jul 20 13:47:08.414: INFO: Created: latency-svc-l5brr Jul 20 13:47:08.475: INFO: Got endpoints: latency-svc-l5brr [4.42718353s] Jul 20 13:47:08.607: INFO: Created: latency-svc-ll47j Jul 20 13:47:08.649: INFO: Got endpoints: latency-svc-ll47j [4.60091181s] Jul 20 13:47:08.817: INFO: Created: latency-svc-v5j7j Jul 20 13:47:08.841: INFO: Got endpoints: latency-svc-v5j7j [4.792631756s] Jul 20 13:47:09.207: INFO: Created: latency-svc-5fcn4 Jul 20 13:47:09.278: INFO: Got endpoints: latency-svc-5fcn4 [5.230446858s] Jul 20 13:47:09.494: INFO: Created: latency-svc-xt8hj Jul 20 13:47:09.499: INFO: Got endpoints: latency-svc-xt8hj [5.173734438s] Jul 20 13:47:09.581: INFO: Created: latency-svc-4m98j Jul 20 13:47:09.589: INFO: Got endpoints: latency-svc-4m98j [4.463709331s] Jul 20 13:47:09.691: INFO: Created: latency-svc-dctbk Jul 20 13:47:09.701: INFO: Got endpoints: latency-svc-dctbk [4.223315902s] Jul 20 13:47:09.739: INFO: Created: latency-svc-wp2jv Jul 20 13:47:09.772: INFO: Got endpoints: latency-svc-wp2jv [3.806718359s] Jul 20 13:47:09.858: INFO: Created: latency-svc-cdk6c Jul 20 13:47:09.882: INFO: Got endpoints: latency-svc-cdk6c [3.777637021s] Jul 20 13:47:09.927: INFO: Created: 
latency-svc-wcfrg Jul 20 13:47:09.952: INFO: Got endpoints: latency-svc-wcfrg [3.629846252s] Jul 20 13:47:10.087: INFO: Created: latency-svc-wxb4s Jul 20 13:47:10.155: INFO: Got endpoints: latency-svc-wxb4s [3.691821908s] Jul 20 13:47:10.327: INFO: Created: latency-svc-5fjff Jul 20 13:47:10.419: INFO: Got endpoints: latency-svc-5fjff [3.903051658s] Jul 20 13:47:10.420: INFO: Created: latency-svc-sbwds Jul 20 13:47:10.546: INFO: Got endpoints: latency-svc-sbwds [3.861683243s] Jul 20 13:47:10.715: INFO: Created: latency-svc-4p7v8 Jul 20 13:47:10.773: INFO: Got endpoints: latency-svc-4p7v8 [3.489385369s] Jul 20 13:47:10.883: INFO: Created: latency-svc-dxfcc Jul 20 13:47:10.886: INFO: Got endpoints: latency-svc-dxfcc [2.764305374s] Jul 20 13:47:11.188: INFO: Created: latency-svc-cwzmr Jul 20 13:47:11.193: INFO: Got endpoints: latency-svc-cwzmr [2.717465612s] Jul 20 13:47:12.715: INFO: Created: latency-svc-gvsjl Jul 20 13:47:13.367: INFO: Got endpoints: latency-svc-gvsjl [4.718330413s] Jul 20 13:47:13.559: INFO: Created: latency-svc-62cmd Jul 20 13:47:13.655: INFO: Got endpoints: latency-svc-62cmd [4.813905402s] Jul 20 13:47:14.057: INFO: Created: latency-svc-glvrx Jul 20 13:47:14.120: INFO: Got endpoints: latency-svc-glvrx [4.841304652s] Jul 20 13:47:14.290: INFO: Created: latency-svc-cxkfh Jul 20 13:47:14.335: INFO: Got endpoints: latency-svc-cxkfh [4.835990257s] Jul 20 13:47:15.236: INFO: Created: latency-svc-zpsjw Jul 20 13:47:15.679: INFO: Got endpoints: latency-svc-zpsjw [6.089891407s] Jul 20 13:47:16.216: INFO: Created: latency-svc-r6qv4 Jul 20 13:47:16.557: INFO: Got endpoints: latency-svc-r6qv4 [6.85510729s] Jul 20 13:47:16.558: INFO: Created: latency-svc-ndwgs Jul 20 13:47:16.783: INFO: Got endpoints: latency-svc-ndwgs [7.011636797s] Jul 20 13:47:17.214: INFO: Created: latency-svc-7tph2 Jul 20 13:47:17.325: INFO: Got endpoints: latency-svc-7tph2 [7.442401579s] Jul 20 13:47:17.400: INFO: Created: latency-svc-4j5jg Jul 20 13:47:17.524: INFO: Got endpoints: latency-svc-4j5jg [7.572389819s] Jul 20 13:47:17.533: INFO: Created: latency-svc-k45rt Jul 20 13:47:17.574: INFO: Got endpoints: latency-svc-k45rt [7.418476974s] Jul 20 13:47:17.721: INFO: Created: latency-svc-5hvjx Jul 20 13:47:17.731: INFO: Got endpoints: latency-svc-5hvjx [7.312203363s] Jul 20 13:47:17.949: INFO: Created: latency-svc-sdwff Jul 20 13:47:17.958: INFO: Got endpoints: latency-svc-sdwff [7.411594471s] Jul 20 13:47:18.010: INFO: Created: latency-svc-bxlqx Jul 20 13:47:18.141: INFO: Got endpoints: latency-svc-bxlqx [7.367821931s] Jul 20 13:47:18.149: INFO: Created: latency-svc-4fxz6 Jul 20 13:47:18.185: INFO: Got endpoints: latency-svc-4fxz6 [7.298556102s] Jul 20 13:47:18.240: INFO: Created: latency-svc-2mtsr Jul 20 13:47:18.350: INFO: Got endpoints: latency-svc-2mtsr [7.157127398s] Jul 20 13:47:18.391: INFO: Created: latency-svc-j7l47 Jul 20 13:47:18.398: INFO: Got endpoints: latency-svc-j7l47 [5.031133501s] Jul 20 13:47:18.493: INFO: Created: latency-svc-vb2xs Jul 20 13:47:18.501: INFO: Got endpoints: latency-svc-vb2xs [4.845916308s] Jul 20 13:47:18.591: INFO: Created: latency-svc-l7bzx Jul 20 13:47:18.716: INFO: Got endpoints: latency-svc-l7bzx [4.596251119s] Jul 20 13:47:18.721: INFO: Created: latency-svc-b4tnl Jul 20 13:47:18.741: INFO: Got endpoints: latency-svc-b4tnl [4.406119527s] Jul 20 13:47:18.902: INFO: Created: latency-svc-dfxcw Jul 20 13:47:18.951: INFO: Got endpoints: latency-svc-dfxcw [3.271723952s] Jul 20 13:47:19.104: INFO: Created: latency-svc-fb9f9 Jul 20 13:47:19.109: INFO: Got endpoints: 
latency-svc-fb9f9 [2.552463678s] Jul 20 13:47:19.328: INFO: Created: latency-svc-zvcwg Jul 20 13:47:19.347: INFO: Got endpoints: latency-svc-zvcwg [2.563163372s] Jul 20 13:47:19.408: INFO: Created: latency-svc-5f22b Jul 20 13:47:19.482: INFO: Got endpoints: latency-svc-5f22b [2.156775338s] Jul 20 13:47:19.559: INFO: Created: latency-svc-jfshh Jul 20 13:47:19.607: INFO: Got endpoints: latency-svc-jfshh [2.082629846s] Jul 20 13:47:19.657: INFO: Created: latency-svc-2nmlj Jul 20 13:47:19.704: INFO: Got endpoints: latency-svc-2nmlj [2.129751256s] Jul 20 13:47:20.165: INFO: Created: latency-svc-tzxld Jul 20 13:47:20.890: INFO: Got endpoints: latency-svc-tzxld [3.158583049s] Jul 20 13:47:21.153: INFO: Created: latency-svc-kr2gq Jul 20 13:47:21.245: INFO: Got endpoints: latency-svc-kr2gq [3.287224297s] Jul 20 13:47:22.028: INFO: Created: latency-svc-lkjm9 Jul 20 13:47:22.680: INFO: Got endpoints: latency-svc-lkjm9 [4.539300945s] Jul 20 13:47:22.908: INFO: Created: latency-svc-l44st Jul 20 13:47:23.439: INFO: Got endpoints: latency-svc-l44st [5.253903975s] Jul 20 13:47:23.938: INFO: Created: latency-svc-j5c28 Jul 20 13:47:24.625: INFO: Created: latency-svc-vd52q Jul 20 13:47:24.626: INFO: Got endpoints: latency-svc-j5c28 [6.275731003s] Jul 20 13:47:24.630: INFO: Got endpoints: latency-svc-vd52q [6.232142375s] Jul 20 13:47:24.832: INFO: Created: latency-svc-l8664 Jul 20 13:47:25.480: INFO: Got endpoints: latency-svc-l8664 [6.979043824s] Jul 20 13:47:26.036: INFO: Created: latency-svc-8pxcg Jul 20 13:47:26.278: INFO: Got endpoints: latency-svc-8pxcg [7.561692782s] Jul 20 13:47:26.873: INFO: Created: latency-svc-jl859 Jul 20 13:47:27.155: INFO: Got endpoints: latency-svc-jl859 [8.413982327s] Jul 20 13:47:27.329: INFO: Created: latency-svc-rpvzp Jul 20 13:47:27.374: INFO: Got endpoints: latency-svc-rpvzp [8.422938111s] Jul 20 13:47:27.918: INFO: Created: latency-svc-kb5h5 Jul 20 13:47:27.955: INFO: Got endpoints: latency-svc-kb5h5 [8.846148667s] Jul 20 13:47:28.092: INFO: Created: latency-svc-fsspw Jul 20 13:47:28.109: INFO: Got endpoints: latency-svc-fsspw [8.762222074s] Jul 20 13:47:28.324: INFO: Created: latency-svc-dfhvz Jul 20 13:47:28.334: INFO: Got endpoints: latency-svc-dfhvz [8.852359565s] Jul 20 13:47:28.783: INFO: Created: latency-svc-bn7nz Jul 20 13:47:29.076: INFO: Got endpoints: latency-svc-bn7nz [9.468723783s] Jul 20 13:47:29.165: INFO: Created: latency-svc-cklfs Jul 20 13:47:29.259: INFO: Got endpoints: latency-svc-cklfs [9.555625062s] Jul 20 13:47:29.316: INFO: Created: latency-svc-rhrwh Jul 20 13:47:29.351: INFO: Got endpoints: latency-svc-rhrwh [8.461339641s] Jul 20 13:47:29.451: INFO: Created: latency-svc-g7wf2 Jul 20 13:47:29.465: INFO: Got endpoints: latency-svc-g7wf2 [8.21980562s] Jul 20 13:47:29.515: INFO: Created: latency-svc-w8jp7 Jul 20 13:47:29.613: INFO: Got endpoints: latency-svc-w8jp7 [6.932507685s] Jul 20 13:47:29.615: INFO: Created: latency-svc-f2rh7 Jul 20 13:47:29.623: INFO: Got endpoints: latency-svc-f2rh7 [6.183691375s] Jul 20 13:47:29.671: INFO: Created: latency-svc-spjgl Jul 20 13:47:29.701: INFO: Got endpoints: latency-svc-spjgl [5.075000103s] Jul 20 13:47:29.811: INFO: Created: latency-svc-94hvl Jul 20 13:47:29.875: INFO: Got endpoints: latency-svc-94hvl [5.244894216s] Jul 20 13:47:30.345: INFO: Created: latency-svc-pvj4r Jul 20 13:47:30.373: INFO: Got endpoints: latency-svc-pvj4r [4.892997185s] Jul 20 13:47:30.507: INFO: Created: latency-svc-kpjrr Jul 20 13:47:30.541: INFO: Got endpoints: latency-svc-kpjrr [4.262765002s] Jul 20 13:47:30.877: INFO: Created: 
latency-svc-j5nwm Jul 20 13:47:30.895: INFO: Got endpoints: latency-svc-j5nwm [3.739476519s] Jul 20 13:47:30.945: INFO: Created: latency-svc-lsbmd Jul 20 13:47:31.026: INFO: Got endpoints: latency-svc-lsbmd [3.651694879s] Jul 20 13:47:31.417: INFO: Created: latency-svc-wqmkn Jul 20 13:47:31.451: INFO: Got endpoints: latency-svc-wqmkn [3.495367587s] Jul 20 13:47:31.613: INFO: Created: latency-svc-hjc52 Jul 20 13:47:31.657: INFO: Got endpoints: latency-svc-hjc52 [3.547639158s] Jul 20 13:47:31.702: INFO: Created: latency-svc-k8fkz Jul 20 13:47:31.762: INFO: Got endpoints: latency-svc-k8fkz [3.428164911s] Jul 20 13:47:31.973: INFO: Created: latency-svc-46g27 Jul 20 13:47:31.988: INFO: Got endpoints: latency-svc-46g27 [2.912373633s] Jul 20 13:47:32.040: INFO: Created: latency-svc-qslcn Jul 20 13:47:32.194: INFO: Got endpoints: latency-svc-qslcn [2.934677117s] Jul 20 13:47:32.197: INFO: Created: latency-svc-bgmtb Jul 20 13:47:32.233: INFO: Got endpoints: latency-svc-bgmtb [2.881400772s] Jul 20 13:47:32.386: INFO: Created: latency-svc-75hnz Jul 20 13:47:32.439: INFO: Got endpoints: latency-svc-75hnz [2.973830792s] Jul 20 13:47:32.571: INFO: Created: latency-svc-66fnc Jul 20 13:47:32.607: INFO: Got endpoints: latency-svc-66fnc [2.994344376s] Jul 20 13:47:32.667: INFO: Created: latency-svc-qmkmv Jul 20 13:47:32.745: INFO: Got endpoints: latency-svc-qmkmv [3.12243536s] Jul 20 13:47:32.799: INFO: Created: latency-svc-p2228 Jul 20 13:47:32.829: INFO: Got endpoints: latency-svc-p2228 [3.127960143s] Jul 20 13:47:32.908: INFO: Created: latency-svc-2pc64 Jul 20 13:47:32.925: INFO: Got endpoints: latency-svc-2pc64 [3.049473632s] Jul 20 13:47:33.069: INFO: Created: latency-svc-4h8ms Jul 20 13:47:33.119: INFO: Got endpoints: latency-svc-4h8ms [2.745659638s] Jul 20 13:47:33.167: INFO: Created: latency-svc-5xszb Jul 20 13:47:33.283: INFO: Got endpoints: latency-svc-5xszb [2.74223236s] Jul 20 13:47:33.285: INFO: Created: latency-svc-g8hz6 Jul 20 13:47:33.316: INFO: Got endpoints: latency-svc-g8hz6 [2.421018441s] Jul 20 13:47:33.365: INFO: Created: latency-svc-59nz7 Jul 20 13:47:33.433: INFO: Got endpoints: latency-svc-59nz7 [2.407139261s] Jul 20 13:47:33.456: INFO: Created: latency-svc-bwrtc Jul 20 13:47:33.472: INFO: Got endpoints: latency-svc-bwrtc [2.021567369s] Jul 20 13:47:33.687: INFO: Created: latency-svc-f2xgr Jul 20 13:47:33.733: INFO: Got endpoints: latency-svc-f2xgr [2.076668978s] Jul 20 13:47:33.878: INFO: Created: latency-svc-qmwbb Jul 20 13:47:33.916: INFO: Got endpoints: latency-svc-qmwbb [2.153984474s] Jul 20 13:47:33.938: INFO: Created: latency-svc-9frml Jul 20 13:47:33.964: INFO: Got endpoints: latency-svc-9frml [1.97626568s] Jul 20 13:47:34.041: INFO: Created: latency-svc-n4stt Jul 20 13:47:34.054: INFO: Got endpoints: latency-svc-n4stt [1.860216639s] Jul 20 13:47:34.267: INFO: Created: latency-svc-jnvl4 Jul 20 13:47:34.287: INFO: Got endpoints: latency-svc-jnvl4 [2.054719075s] Jul 20 13:47:34.330: INFO: Created: latency-svc-lw4cq Jul 20 13:47:34.359: INFO: Got endpoints: latency-svc-lw4cq [1.920081699s] Jul 20 13:47:34.477: INFO: Created: latency-svc-wgnqn Jul 20 13:47:34.534: INFO: Got endpoints: latency-svc-wgnqn [1.926924857s] Jul 20 13:47:34.625: INFO: Created: latency-svc-xtxx7 Jul 20 13:47:34.703: INFO: Created: latency-svc-cxhcp Jul 20 13:47:34.703: INFO: Got endpoints: latency-svc-xtxx7 [1.958150011s] Jul 20 13:47:34.769: INFO: Got endpoints: latency-svc-cxhcp [1.940066564s] Jul 20 13:47:34.889: INFO: Created: latency-svc-cn6x7 Jul 20 13:47:34.917: INFO: Got endpoints: 
latency-svc-cn6x7 [1.991528986s] Jul 20 13:47:34.949: INFO: Created: latency-svc-p7gzt Jul 20 13:47:35.020: INFO: Got endpoints: latency-svc-p7gzt [1.901487683s] Jul 20 13:47:35.100: INFO: Created: latency-svc-2vwv4 Jul 20 13:47:35.170: INFO: Got endpoints: latency-svc-2vwv4 [1.886717907s] Jul 20 13:47:35.185: INFO: Created: latency-svc-zt4pf Jul 20 13:47:35.226: INFO: Got endpoints: latency-svc-zt4pf [1.910589764s] Jul 20 13:47:35.262: INFO: Created: latency-svc-4pzwh Jul 20 13:47:35.313: INFO: Got endpoints: latency-svc-4pzwh [1.880395983s] Jul 20 13:47:35.329: INFO: Created: latency-svc-fqnfp Jul 20 13:47:35.337: INFO: Got endpoints: latency-svc-fqnfp [1.864401983s] Jul 20 13:47:35.475: INFO: Created: latency-svc-nx847 Jul 20 13:47:35.529: INFO: Got endpoints: latency-svc-nx847 [1.795992337s] Jul 20 13:47:35.570: INFO: Created: latency-svc-6vwbd Jul 20 13:47:35.631: INFO: Got endpoints: latency-svc-6vwbd [1.714268428s] Jul 20 13:47:35.685: INFO: Created: latency-svc-999gn Jul 20 13:47:35.710: INFO: Got endpoints: latency-svc-999gn [1.74516456s] Jul 20 13:47:35.842: INFO: Created: latency-svc-kkpkg Jul 20 13:47:35.885: INFO: Got endpoints: latency-svc-kkpkg [1.830798578s] Jul 20 13:47:35.954: INFO: Created: latency-svc-9scbj Jul 20 13:47:36.004: INFO: Got endpoints: latency-svc-9scbj [1.716715956s] Jul 20 13:47:36.004: INFO: Created: latency-svc-wrwt9 Jul 20 13:47:36.016: INFO: Got endpoints: latency-svc-wrwt9 [1.656386945s] Jul 20 13:47:36.046: INFO: Created: latency-svc-c2dww Jul 20 13:47:36.170: INFO: Got endpoints: latency-svc-c2dww [1.635517581s] Jul 20 13:47:36.171: INFO: Created: latency-svc-dn2vs Jul 20 13:47:36.190: INFO: Got endpoints: latency-svc-dn2vs [1.487040669s] Jul 20 13:47:36.240: INFO: Created: latency-svc-rnr7b Jul 20 13:47:36.319: INFO: Got endpoints: latency-svc-rnr7b [1.550657611s] Jul 20 13:47:36.343: INFO: Created: latency-svc-hxxsz Jul 20 13:47:36.352: INFO: Got endpoints: latency-svc-hxxsz [1.435427535s] Jul 20 13:47:36.378: INFO: Created: latency-svc-q8m2l Jul 20 13:47:36.407: INFO: Got endpoints: latency-svc-q8m2l [1.387322597s] Jul 20 13:47:36.528: INFO: Created: latency-svc-jwkjl Jul 20 13:47:36.560: INFO: Got endpoints: latency-svc-jwkjl [1.389989738s] Jul 20 13:47:36.751: INFO: Created: latency-svc-24kzx Jul 20 13:47:36.755: INFO: Got endpoints: latency-svc-24kzx [1.528942119s] Jul 20 13:47:36.925: INFO: Created: latency-svc-g2wtq Jul 20 13:47:36.962: INFO: Got endpoints: latency-svc-g2wtq [1.648826735s] Jul 20 13:47:36.963: INFO: Created: latency-svc-f6xl2 Jul 20 13:47:37.004: INFO: Got endpoints: latency-svc-f6xl2 [1.667341786s] Jul 20 13:47:37.116: INFO: Created: latency-svc-rdm45 Jul 20 13:47:37.119: INFO: Got endpoints: latency-svc-rdm45 [1.589264214s] Jul 20 13:47:37.254: INFO: Created: latency-svc-8q4q6 Jul 20 13:47:37.301: INFO: Got endpoints: latency-svc-8q4q6 [1.670122247s] Jul 20 13:47:37.438: INFO: Created: latency-svc-8bzpr Jul 20 13:47:37.449: INFO: Got endpoints: latency-svc-8bzpr [1.739907653s] Jul 20 13:47:37.486: INFO: Created: latency-svc-tfsls Jul 20 13:47:37.607: INFO: Got endpoints: latency-svc-tfsls [1.721512301s] Jul 20 13:47:37.625: INFO: Created: latency-svc-8dtxz Jul 20 13:47:37.642: INFO: Got endpoints: latency-svc-8dtxz [1.637625149s] Jul 20 13:47:37.667: INFO: Created: latency-svc-ptsmg Jul 20 13:47:37.698: INFO: Got endpoints: latency-svc-ptsmg [1.681814846s] Jul 20 13:47:37.787: INFO: Created: latency-svc-d89bw Jul 20 13:47:37.853: INFO: Got endpoints: latency-svc-d89bw [1.683545711s] Jul 20 13:47:37.854: INFO: Created: 
latency-svc-pph2c Jul 20 13:47:37.870: INFO: Got endpoints: latency-svc-pph2c [1.679308261s] Jul 20 13:47:37.978: INFO: Created: latency-svc-dp64s Jul 20 13:47:37.982: INFO: Got endpoints: latency-svc-dp64s [1.662670231s] Jul 20 13:47:38.046: INFO: Created: latency-svc-d7krq Jul 20 13:47:38.069: INFO: Got endpoints: latency-svc-d7krq [1.716649503s] Jul 20 13:47:38.128: INFO: Created: latency-svc-7cdgr Jul 20 13:47:38.156: INFO: Got endpoints: latency-svc-7cdgr [1.748171099s] Jul 20 13:47:38.318: INFO: Created: latency-svc-mvvpl Jul 20 13:47:38.345: INFO: Got endpoints: latency-svc-mvvpl [1.785002795s] Jul 20 13:47:38.378: INFO: Created: latency-svc-crh5r Jul 20 13:47:38.386: INFO: Got endpoints: latency-svc-crh5r [1.630974103s] Jul 20 13:47:38.504: INFO: Created: latency-svc-8dmg9 Jul 20 13:47:38.514: INFO: Got endpoints: latency-svc-8dmg9 [1.551315367s] Jul 20 13:47:38.565: INFO: Created: latency-svc-vmngc Jul 20 13:47:38.654: INFO: Got endpoints: latency-svc-vmngc [1.650032101s] Jul 20 13:47:38.673: INFO: Created: latency-svc-npmcc Jul 20 13:47:38.706: INFO: Got endpoints: latency-svc-npmcc [1.587769004s] Jul 20 13:47:38.822: INFO: Created: latency-svc-plwtk Jul 20 13:47:38.846: INFO: Got endpoints: latency-svc-plwtk [1.545280059s] Jul 20 13:47:38.903: INFO: Created: latency-svc-4tfpz Jul 20 13:47:39.008: INFO: Got endpoints: latency-svc-4tfpz [1.558349769s] Jul 20 13:47:39.065: INFO: Created: latency-svc-wpg76 Jul 20 13:47:39.092: INFO: Got endpoints: latency-svc-wpg76 [1.485495523s] Jul 20 13:47:39.260: INFO: Created: latency-svc-g2wd4 Jul 20 13:47:39.323: INFO: Got endpoints: latency-svc-g2wd4 [1.681173906s] Jul 20 13:47:39.433: INFO: Created: latency-svc-ntw9g Jul 20 13:47:39.436: INFO: Got endpoints: latency-svc-ntw9g [1.738036496s] Jul 20 13:47:39.600: INFO: Created: latency-svc-kbx9g Jul 20 13:47:39.650: INFO: Created: latency-svc-59q96 Jul 20 13:47:39.650: INFO: Got endpoints: latency-svc-kbx9g [1.796691636s] Jul 20 13:47:39.692: INFO: Got endpoints: latency-svc-59q96 [1.822079454s] Jul 20 13:47:39.788: INFO: Created: latency-svc-gch79 Jul 20 13:47:39.794: INFO: Got endpoints: latency-svc-gch79 [1.811786421s] Jul 20 13:47:39.930: INFO: Created: latency-svc-vkzlv Jul 20 13:47:39.971: INFO: Got endpoints: latency-svc-vkzlv [1.902128221s] Jul 20 13:47:40.024: INFO: Created: latency-svc-nzr8n Jul 20 13:47:40.069: INFO: Got endpoints: latency-svc-nzr8n [1.913335837s] Jul 20 13:47:40.126: INFO: Created: latency-svc-csbf2 Jul 20 13:47:40.132: INFO: Got endpoints: latency-svc-csbf2 [1.787209374s] Jul 20 13:47:40.302: INFO: Created: latency-svc-nvxp5 Jul 20 13:47:40.347: INFO: Got endpoints: latency-svc-nvxp5 [1.960594419s] Jul 20 13:47:40.385: INFO: Created: latency-svc-6rnbs Jul 20 13:47:40.487: INFO: Got endpoints: latency-svc-6rnbs [1.973695347s] Jul 20 13:47:40.494: INFO: Created: latency-svc-x8x8v Jul 20 13:47:40.530: INFO: Got endpoints: latency-svc-x8x8v [1.875503927s] Jul 20 13:47:40.619: INFO: Created: latency-svc-q96sd Jul 20 13:47:40.622: INFO: Got endpoints: latency-svc-q96sd [1.915421956s] Jul 20 13:47:40.704: INFO: Created: latency-svc-8dnsm Jul 20 13:47:40.775: INFO: Got endpoints: latency-svc-8dnsm [1.928533161s] Jul 20 13:47:40.808: INFO: Created: latency-svc-9d8hf Jul 20 13:47:40.824: INFO: Got endpoints: latency-svc-9d8hf [1.816068394s] Jul 20 13:47:40.868: INFO: Created: latency-svc-bsstc Jul 20 13:47:40.978: INFO: Got endpoints: latency-svc-bsstc [1.885656665s] Jul 20 13:47:41.062: INFO: Created: latency-svc-drn9s Jul 20 13:47:41.170: INFO: Got endpoints: 
latency-svc-drn9s [1.846364873s] Jul 20 13:47:41.199: INFO: Created: latency-svc-scn6w Jul 20 13:47:41.230: INFO: Got endpoints: latency-svc-scn6w [1.794593204s] Jul 20 13:47:41.362: INFO: Created: latency-svc-hc4ph Jul 20 13:47:41.453: INFO: Got endpoints: latency-svc-hc4ph [1.802601955s] Jul 20 13:47:41.453: INFO: Created: latency-svc-skgtl Jul 20 13:47:41.565: INFO: Got endpoints: latency-svc-skgtl [1.872800722s] Jul 20 13:47:41.578: INFO: Created: latency-svc-q4zz5 Jul 20 13:47:41.631: INFO: Got endpoints: latency-svc-q4zz5 [1.837030418s] Jul 20 13:47:41.750: INFO: Created: latency-svc-4lk4p Jul 20 13:47:41.754: INFO: Got endpoints: latency-svc-4lk4p [1.782553153s] Jul 20 13:47:41.826: INFO: Created: latency-svc-4l9th Jul 20 13:47:41.842: INFO: Got endpoints: latency-svc-4l9th [1.772530719s] Jul 20 13:47:41.899: INFO: Created: latency-svc-qtjwd Jul 20 13:47:41.931: INFO: Got endpoints: latency-svc-qtjwd [1.798852015s] Jul 20 13:47:41.971: INFO: Created: latency-svc-cztfv Jul 20 13:47:41.980: INFO: Got endpoints: latency-svc-cztfv [1.632661769s] Jul 20 13:47:42.116: INFO: Created: latency-svc-jkc9n Jul 20 13:47:42.182: INFO: Created: latency-svc-b477l Jul 20 13:47:42.182: INFO: Got endpoints: latency-svc-jkc9n [1.69491159s] Jul 20 13:47:42.283: INFO: Got endpoints: latency-svc-b477l [1.753507956s] Jul 20 13:47:42.369: INFO: Created: latency-svc-hsj7l Jul 20 13:47:42.475: INFO: Got endpoints: latency-svc-hsj7l [1.85304953s] Jul 20 13:47:42.661: INFO: Created: latency-svc-kgqkr Jul 20 13:47:42.665: INFO: Got endpoints: latency-svc-kgqkr [1.890206181s] Jul 20 13:47:42.760: INFO: Created: latency-svc-zksqs Jul 20 13:47:42.864: INFO: Got endpoints: latency-svc-zksqs [2.040359277s] Jul 20 13:47:42.867: INFO: Created: latency-svc-cx8p9 Jul 20 13:47:42.886: INFO: Got endpoints: latency-svc-cx8p9 [1.90799845s] Jul 20 13:47:42.932: INFO: Created: latency-svc-j8ffk Jul 20 13:47:42.954: INFO: Got endpoints: latency-svc-j8ffk [1.784347069s] Jul 20 13:47:43.044: INFO: Created: latency-svc-xcvt8 Jul 20 13:47:43.061: INFO: Got endpoints: latency-svc-xcvt8 [1.830202565s] Jul 20 13:47:43.121: INFO: Created: latency-svc-qm6pp Jul 20 13:47:43.133: INFO: Got endpoints: latency-svc-qm6pp [1.680360744s] Jul 20 13:47:43.230: INFO: Created: latency-svc-2h2kv Jul 20 13:47:43.285: INFO: Got endpoints: latency-svc-2h2kv [1.720384308s] Jul 20 13:47:43.286: INFO: Created: latency-svc-bf2cv Jul 20 13:47:43.379: INFO: Got endpoints: latency-svc-bf2cv [1.747952565s] Jul 20 13:47:43.412: INFO: Created: latency-svc-fm8dg Jul 20 13:47:43.464: INFO: Got endpoints: latency-svc-fm8dg [1.710330906s] Jul 20 13:47:43.677: INFO: Created: latency-svc-8p7hc Jul 20 13:47:43.805: INFO: Got endpoints: latency-svc-8p7hc [1.96306799s] Jul 20 13:47:43.852: INFO: Created: latency-svc-8fcw9 Jul 20 13:47:43.879: INFO: Got endpoints: latency-svc-8fcw9 [1.947358393s] Jul 20 13:47:44.022: INFO: Created: latency-svc-mh9wk Jul 20 13:47:44.029: INFO: Got endpoints: latency-svc-mh9wk [2.049603379s] Jul 20 13:47:44.101: INFO: Created: latency-svc-kcldh Jul 20 13:47:44.229: INFO: Got endpoints: latency-svc-kcldh [2.046778039s] Jul 20 13:47:44.232: INFO: Created: latency-svc-8bhpt Jul 20 13:47:44.274: INFO: Got endpoints: latency-svc-8bhpt [1.990602306s] Jul 20 13:47:44.749: INFO: Created: latency-svc-mctmc Jul 20 13:47:44.907: INFO: Got endpoints: latency-svc-mctmc [2.431790095s] Jul 20 13:47:45.146: INFO: Created: latency-svc-tz847 Jul 20 13:47:45.149: INFO: Got endpoints: latency-svc-tz847 [2.484018181s] Jul 20 13:47:45.379: INFO: Created: 
latency-svc-jl6t2 Jul 20 13:47:45.445: INFO: Got endpoints: latency-svc-jl6t2 [2.580034469s] Jul 20 13:47:45.626: INFO: Created: latency-svc-9n5mp Jul 20 13:47:45.636: INFO: Got endpoints: latency-svc-9n5mp [2.749955445s] Jul 20 13:47:45.688: INFO: Created: latency-svc-q927p Jul 20 13:47:46.627: INFO: Got endpoints: latency-svc-q927p [3.672398316s] Jul 20 13:47:47.024: INFO: Created: latency-svc-76srv Jul 20 13:47:47.081: INFO: Got endpoints: latency-svc-76srv [4.020011082s] Jul 20 13:47:47.674: INFO: Created: latency-svc-2dgz4 Jul 20 13:47:47.679: INFO: Got endpoints: latency-svc-2dgz4 [4.545952461s] Jul 20 13:47:47.948: INFO: Created: latency-svc-xxtbs Jul 20 13:47:47.986: INFO: Got endpoints: latency-svc-xxtbs [4.700632029s] Jul 20 13:47:48.122: INFO: Created: latency-svc-k9d6k Jul 20 13:47:48.126: INFO: Got endpoints: latency-svc-k9d6k [4.747204749s] Jul 20 13:47:48.365: INFO: Created: latency-svc-bzqnh Jul 20 13:47:48.406: INFO: Got endpoints: latency-svc-bzqnh [4.941675065s] Jul 20 13:47:48.436: INFO: Created: latency-svc-qldk8 Jul 20 13:47:48.447: INFO: Got endpoints: latency-svc-qldk8 [4.641987516s] Jul 20 13:47:48.511: INFO: Created: latency-svc-v4qkm Jul 20 13:47:48.550: INFO: Created: latency-svc-4n9nq Jul 20 13:47:48.551: INFO: Got endpoints: latency-svc-v4qkm [4.671859295s] Jul 20 13:47:48.580: INFO: Got endpoints: latency-svc-4n9nq [4.550417196s] Jul 20 13:47:48.605: INFO: Created: latency-svc-gbbjb Jul 20 13:47:48.702: INFO: Got endpoints: latency-svc-gbbjb [4.473122773s] Jul 20 13:47:48.707: INFO: Created: latency-svc-fdblk Jul 20 13:47:48.762: INFO: Got endpoints: latency-svc-fdblk [4.487341564s] Jul 20 13:47:48.853: INFO: Created: latency-svc-4vvw7 Jul 20 13:47:48.888: INFO: Got endpoints: latency-svc-4vvw7 [3.981334486s] Jul 20 13:47:49.050: INFO: Created: latency-svc-pwptk Jul 20 13:47:49.129: INFO: Got endpoints: latency-svc-pwptk [3.979437368s] Jul 20 13:47:49.129: INFO: Latencies: [277.367764ms 1.077940294s 1.387322597s 1.389989738s 1.429970913s 1.435427535s 1.485495523s 1.487040669s 1.528942119s 1.545280059s 1.550657611s 1.551315367s 1.558349769s 1.587769004s 1.589264214s 1.630974103s 1.632661769s 1.635517581s 1.637625149s 1.648826735s 1.650032101s 1.656386945s 1.662670231s 1.667341786s 1.670122247s 1.679308261s 1.680360744s 1.681173906s 1.681814846s 1.683545711s 1.69491159s 1.710330906s 1.714268428s 1.716649503s 1.716715956s 1.720384308s 1.721512301s 1.738036496s 1.739907653s 1.74516456s 1.747952565s 1.748171099s 1.753507956s 1.772530719s 1.782553153s 1.784347069s 1.785002795s 1.787209374s 1.794593204s 1.795992337s 1.796691636s 1.798852015s 1.802601955s 1.811786421s 1.816068394s 1.822079454s 1.830202565s 1.830798578s 1.837030418s 1.846364873s 1.85304953s 1.860216639s 1.864401983s 1.872800722s 1.875503927s 1.880395983s 1.885656665s 1.886717907s 1.890206181s 1.901487683s 1.902128221s 1.90799845s 1.910589764s 1.913335837s 1.915421956s 1.917187922s 1.920081699s 1.926924857s 1.928533161s 1.940066564s 1.947358393s 1.958150011s 1.960594419s 1.96306799s 1.973695347s 1.97626568s 1.990602306s 1.991528986s 2.021567369s 2.040359277s 2.046778039s 2.049603379s 2.054719075s 2.056627504s 2.076668978s 2.082629846s 2.129751256s 2.153984474s 2.156775338s 2.273673925s 2.407139261s 2.415392904s 2.421018441s 2.431790095s 2.467780095s 2.484018181s 2.552463678s 2.563163372s 2.580034469s 2.636682269s 2.717465612s 2.74223236s 2.745659638s 2.749955445s 2.764305374s 2.881400772s 2.912373633s 2.934677117s 2.973830792s 2.994344376s 3.049473632s 3.12243536s 3.127960143s 3.158583049s 3.23539031s 
3.271723952s 3.287224297s 3.428164911s 3.489385369s 3.495367587s 3.547639158s 3.629846252s 3.651694879s 3.672398316s 3.691821908s 3.739476519s 3.777637021s 3.806718359s 3.861683243s 3.903051658s 3.979437368s 3.981334486s 4.020011082s 4.073804535s 4.223315902s 4.262765002s 4.406119527s 4.42718353s 4.463709331s 4.473122773s 4.487341564s 4.539300945s 4.545952461s 4.550417196s 4.596251119s 4.60091181s 4.641987516s 4.671859295s 4.700632029s 4.718330413s 4.747204749s 4.792631756s 4.813905402s 4.835990257s 4.841304652s 4.845916308s 4.892997185s 4.941675065s 5.031133501s 5.075000103s 5.173734438s 5.230446858s 5.244894216s 5.253903975s 6.089891407s 6.183691375s 6.232142375s 6.275731003s 6.85510729s 6.932507685s 6.979043824s 7.011636797s 7.157127398s 7.298556102s 7.312203363s 7.367821931s 7.411594471s 7.418476974s 7.442401579s 7.561692782s 7.572389819s 8.21980562s 8.413982327s 8.422938111s 8.461339641s 8.762222074s 8.846148667s 8.852359565s 9.468723783s 9.555625062s] Jul 20 13:47:49.129: INFO: 50 %ile: 2.407139261s Jul 20 13:47:49.129: INFO: 90 %ile: 6.979043824s Jul 20 13:47:49.129: INFO: 99 %ile: 9.468723783s Jul 20 13:47:49.129: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:47:49.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8805" for this suite. • [SLOW TEST:55.780 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":45,"skipped":823,"failed":0} [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:47:49.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-5284/configmap-test-33f0c55a-4f55-4e7c-833e-5f4645096aba STEP: Creating a pod to test consume configMaps Jul 20 13:47:49.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761" in namespace "configmap-5284" to be "Succeeded or Failed" Jul 20 13:47:49.473: INFO: Pod "pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268286ms Jul 20 13:47:51.477: INFO: Pod "pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006400383s Jul 20 13:47:53.593: INFO: Pod "pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.122471631s Jul 20 13:47:55.667: INFO: Pod "pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761": Phase="Running", Reason="", readiness=true. Elapsed: 6.196771468s Jul 20 13:47:57.811: INFO: Pod "pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.340931648s STEP: Saw pod success Jul 20 13:47:57.811: INFO: Pod "pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761" satisfied condition "Succeeded or Failed" Jul 20 13:47:57.868: INFO: Trying to get logs from node kali-worker pod pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761 container env-test: STEP: delete the pod Jul 20 13:47:58.062: INFO: Waiting for pod pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761 to disappear Jul 20 13:47:58.176: INFO: Pod pod-configmaps-e24e4639-df2c-4799-bc6c-026189e5a761 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:47:58.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5284" for this suite. • [SLOW TEST:9.025 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":823,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:47:58.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 20 13:47:58.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2" in namespace "projected-3992" to be "Succeeded or Failed" Jul 20 13:47:58.970: INFO: Pod "downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2": Phase="Pending", Reason="", readiness=false. Elapsed: 233.532519ms Jul 20 13:48:01.086: INFO: Pod "downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349412391s Jul 20 13:48:03.367: INFO: Pod "downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.630550016s Jul 20 13:48:06.021: INFO: Pod "downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.283840237s STEP: Saw pod success Jul 20 13:48:06.021: INFO: Pod "downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2" satisfied condition "Succeeded or Failed" Jul 20 13:48:06.302: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2 container client-container: STEP: delete the pod Jul 20 13:48:06.536: INFO: Waiting for pod downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2 to disappear Jul 20 13:48:06.631: INFO: Pod downwardapi-volume-b295b2fb-246e-4320-861a-23a50c1f90e2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:48:06.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3992" for this suite. • [SLOW TEST:8.581 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":842,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:48:06.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 13:48:09.523: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 13:48:11.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:48:13.856: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:48:15.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849689, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 13:48:19.112: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jul 20 13:48:19.345: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:48:20.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4017" for this suite. STEP: Destroying namespace "webhook-4017-markers" for this suite. 
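------------------------------
The webhook test above registers a validating webhook whose rules match CREATE operations on customresourcedefinitions, then expects the following CRD creation to be rejected by the API server. A minimal client-go sketch of such a registration is below; it is not the test's own code. The service name and namespace echo the log, while the webhook name and service path are placeholders, and the CA bundle (which the test generates and injects) is omitted.

// Sketch (assumed names): register a validating webhook that intercepts CRD creation.
package main

import (
    "context"

    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    fail := admissionregistrationv1.Fail
    sideEffects := admissionregistrationv1.SideEffectClassNone
    path := "/crd" // placeholder path served by the webhook pod
    webhook := &admissionregistrationv1.ValidatingWebhookConfiguration{
        ObjectMeta: metav1.ObjectMeta{Name: "deny-crd.example.com"},
        Webhooks: []admissionregistrationv1.ValidatingWebhook{{
            Name: "deny-crd.example.com",
            Rules: []admissionregistrationv1.RuleWithOperations{{
                Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                Rule: admissionregistrationv1.Rule{
                    APIGroups:   []string{"apiextensions.k8s.io"},
                    APIVersions: []string{"v1"},
                    Resources:   []string{"customresourcedefinitions"},
                },
            }},
            ClientConfig: admissionregistrationv1.WebhookClientConfig{
                // CABundle omitted here; the e2e test injects the CA it generated.
                Service: &admissionregistrationv1.ServiceReference{
                    Namespace: "webhook-4017", Name: "e2e-test-webhook", Path: &path,
                },
            },
            FailurePolicy:           &fail,
            SideEffects:             &sideEffects,
            AdmissionReviewVersions: []string{"v1", "v1beta1"},
        }},
    }
    if _, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
        Create(context.TODO(), webhook, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // With this in place, creating a CustomResourceDefinition matched by the rule
    // fails at admission time, which is what the test asserts.
}
------------------------------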
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.630 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":48,"skipped":857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:48:21.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-6a932195-48d9-4ec2-a928-9632597bd659 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:48:22.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8423" for this suite. 
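------------------------------
The Secrets test above passes on API-server validation alone: a Secret whose data map uses the empty string as a key is rejected at creation time, so no pod is ever scheduled. A minimal client-go sketch of the same check; the kubeconfig path matches this run, while the namespace and object name are illustrative.

// Sketch (assumed names): create a Secret with an empty data key and expect an error.
package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
        Data:       map[string][]byte{"": []byte("value-1")}, // empty key is invalid
    }
    _, err = client.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
    // The API server is expected to reject the object, so err should be non-nil.
    fmt.Println("create error:", err)
}
------------------------------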
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":49,"skipped":900,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:48:22.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 13:48:25.547: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 13:48:29.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849706, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:48:31.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849706, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:48:33.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849706, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:48:35.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849706, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 13:48:37.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849706, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730849705, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 13:48:40.625: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jul 20 13:48:41.625: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jul 20 13:48:42.625: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:48:44.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8781" for this suite. STEP: Destroying namespace "webhook-8781-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.926 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":50,"skipped":905,"failed":0} [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:48:45.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:48:57.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8847" for this suite. 
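------------------------------
The Docker Containers test above creates a container with neither command nor args set, so the runtime falls back to the image's own ENTRYPOINT and CMD. A sketch of such a pod follows; busybox stands in for the conformance suite's own test image, and the namespace is assumed.

// Sketch: Command and Args left unset so the image defaults are used.
package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-defaults"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // No Command, no Args: the runtime executes whatever the image defines.
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------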
• [SLOW TEST:12.105 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:48:57.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-d7a80535-614b-4592-aa84-0c03d17ade24 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:49:10.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7008" for this suite. 
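------------------------------
The ConfigMap test above stores both text data and binary data in one ConfigMap, mounts it as a volume, and waits until both files appear inside the pod. A sketch of the same shape; the namespace, image, keys, and byte payload are illustrative rather than taken from the run.

// Sketch (assumed names): a ConfigMap with Data and BinaryData, consumed via a volume.
package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "default"

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-binary"},
        Data:       map[string]string{"data-1": "value-1"},                  // text payload
        BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}}, // arbitrary bytes
    }
    if _, err := client.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-binary"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data-1; ls -l /etc/configmap-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
        },
    }
    if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------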
• [SLOW TEST:13.560 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":945,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:49:11.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:49:13.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5324" for this suite. 
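------------------------------
The CustomResourceDefinition test above only reads discovery documents: /apis must list the apiextensions.k8s.io group, and /apis/apiextensions.k8s.io/v1 must advertise the customresourcedefinitions resource. The same walk can be done with the discovery client; the kubeconfig path echoes this run, everything else is generic.

// Sketch: check the discovery documents for the apiextensions.k8s.io/v1 CRD resource.
package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // /apis: the group list should contain apiextensions.k8s.io.
    groups, err := client.Discovery().ServerGroups()
    if err != nil {
        panic(err)
    }
    for _, g := range groups.Groups {
        if g.Name == "apiextensions.k8s.io" {
            fmt.Println("preferred version:", g.PreferredVersion.GroupVersion)
        }
    }

    // /apis/apiextensions.k8s.io/v1: the resource list should contain customresourcedefinitions.
    resources, err := client.Discovery().ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
    if err != nil {
        panic(err)
    }
    for _, r := range resources.APIResources {
        if r.Name == "customresourcedefinitions" {
            fmt.Println("found resource, kind:", r.Kind)
        }
    }
}
------------------------------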
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":53,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:49:13.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:49:19.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7731" for this suite. STEP: Destroying namespace "nspatchtest-0dbbcee8-3e47-4d65-ab71-68aee25fbfcf-6094" for this suite. • [SLOW TEST:6.599 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":54,"skipped":989,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:49:20.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 20 13:49:20.504: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474" in namespace "projected-3829" to be "Succeeded or Failed" Jul 20 13:49:20.702: INFO: Pod "downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474": Phase="Pending", Reason="", readiness=false. Elapsed: 198.020368ms Jul 20 13:49:22.824: INFO: Pod "downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.319224588s Jul 20 13:49:27.202: INFO: Pod "downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474": Phase="Pending", Reason="", readiness=false. Elapsed: 6.697664134s Jul 20 13:49:29.471: INFO: Pod "downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474": Phase="Pending", Reason="", readiness=false. Elapsed: 8.966530361s Jul 20 13:49:31.549: INFO: Pod "downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474": Phase="Pending", Reason="", readiness=false. Elapsed: 11.04429275s Jul 20 13:49:33.614: INFO: Pod "downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.10942598s STEP: Saw pod success Jul 20 13:49:33.614: INFO: Pod "downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474" satisfied condition "Succeeded or Failed" Jul 20 13:49:33.644: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474 container client-container: STEP: delete the pod Jul 20 13:49:34.731: INFO: Waiting for pod downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474 to disappear Jul 20 13:49:34.875: INFO: Pod downwardapi-volume-65652d10-d698-4343-ab2d-d1ef87567474 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:49:34.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3829" for this suite. • [SLOW TEST:14.782 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":991,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:49:34.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-c7d6d28a-8077-42da-8f15-3c13e029683a STEP: Creating configMap with name cm-test-opt-upd-7ca3c3b4-fd93-429d-9f58-70dc4ef559f9 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c7d6d28a-8077-42da-8f15-3c13e029683a STEP: Updating configmap cm-test-opt-upd-7ca3c3b4-fd93-429d-9f58-70dc4ef559f9 STEP: Creating configMap with name cm-test-opt-create-1ccb9042-1c63-41da-b278-77225cd50cb7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:51:02.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "configmap-718" for this suite. • [SLOW TEST:87.683 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":1002,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:51:02.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 13:51:18.598: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:51:19.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6127" for this suite. 
• [SLOW TEST:17.038 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":1009,"failed":0} [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:51:19.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:51:35.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8985" for this suite. 
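------------------------------
The ReplicationController test above first creates a bare pod labeled name=pod-adoption and then a controller whose selector matches that label; instead of creating a replacement replica, the controller adopts the existing pod by setting an ownerReference on it. A sketch of the two objects with an assumed namespace and image.

// Sketch (assumed names): an orphan pod adopted by a matching ReplicationController.
package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "default"
    labels := map[string]string{"name": "pod-adoption"}

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "pod-adoption", Image: "busybox", Command: []string{"sleep", "3600"}}},
        },
    }
    if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    one := int32(1)
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &one,
            Selector: labels, // matches the orphan pod created above
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec:       pod.Spec,
            },
        },
    }
    if _, err := client.CoreV1().ReplicationControllers(ns).Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Once the controller manager syncs, the pod carries an ownerReference to the RC.
    adopted, err := client.CoreV1().Pods(ns).Get(context.TODO(), "pod-adoption", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("owner references:", adopted.OwnerReferences)
}
------------------------------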
• [SLOW TEST:16.134 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":58,"skipped":1009,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:51:35.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 20 13:51:39.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a" in namespace "downward-api-8137" to be "Succeeded or Failed" Jul 20 13:51:40.369: INFO: Pod "downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.139435306s Jul 20 13:51:43.262: INFO: Pod "downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032159133s Jul 20 13:51:45.789: INFO: Pod "downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558925664s Jul 20 13:51:48.418: INFO: Pod "downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.188491891s Jul 20 13:51:51.059: INFO: Pod "downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.829780489s STEP: Saw pod success Jul 20 13:51:51.060: INFO: Pod "downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a" satisfied condition "Succeeded or Failed" Jul 20 13:51:51.305: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a container client-container: STEP: delete the pod Jul 20 13:51:51.658: INFO: Waiting for pod downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a to disappear Jul 20 13:51:51.860: INFO: Pod downwardapi-volume-dc336a2f-e5b2-4e93-bdf4-9e1c67c02d6a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:51:51.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8137" for this suite. 
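------------------------------
The Downward API volume test above projects pod metadata into files and sets an explicit DefaultMode, then asserts the permission bits on the projected file. A sketch of such a pod spec; the 0400 mode, image, and paths are chosen for illustration rather than copied from the test binary.

// Sketch (assumed names): a downward API volume with an explicit DefaultMode.
package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    mode := int32(0400) // permission bits applied to every projected file
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-defaultmode"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        DefaultMode: &mode,
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"}, // prints the mode bits
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------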
• [SLOW TEST:16.927 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":1017,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:51:52.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 20 13:51:53.717: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3880 /api/v1/namespaces/watch-3880/configmaps/e2e-watch-test-watch-closed 0a1654f5-c9ce-4f14-a220-25d98c305e4c 2727759 0 2020-07-20 13:51:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-20 13:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 13:51:53.717: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3880 /api/v1/namespaces/watch-3880/configmaps/e2e-watch-test-watch-closed 0a1654f5-c9ce-4f14-a220-25d98c305e4c 2727764 0 2020-07-20 13:51:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-20 13:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 20 13:51:53.996: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3880 
/api/v1/namespaces/watch-3880/configmaps/e2e-watch-test-watch-closed 0a1654f5-c9ce-4f14-a220-25d98c305e4c 2727766 0 2020-07-20 13:51:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-20 13:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 13:51:53.996: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3880 /api/v1/namespaces/watch-3880/configmaps/e2e-watch-test-watch-closed 0a1654f5-c9ce-4f14-a220-25d98c305e4c 2727770 0 2020-07-20 13:51:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-20 13:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:51:53.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3880" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":60,"skipped":1032,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:51:54.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jul 20 13:51:55.633: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 13:51:56.496: INFO: Waiting for terminating namespaces to be deleted... 
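The watch restart verified above corresponds, in client-go terms, to reopening a ConfigMap watch with ListOptions.ResourceVersion set to the last resourceVersion delivered before the first watch was stopped, so modifications made while no watch was open are still delivered. A rough sketch under that reading, with the namespace and event handling as assumptions (the kubeconfig path and label selector follow the run above):

package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cms := kubernetes.NewForConfigOrDie(cfg).CoreV1().ConfigMaps("watch-demo")
	opts := metav1.ListOptions{LabelSelector: "watch-this-configmap=watch-closed-and-restarted"}

	// First watch: consume a couple of events (e.g. ADDED and the first
	// MODIFIED), remembering the resourceVersion of the last one seen.
	w1, err := cms.Watch(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	var lastRV string
	for i := 0; i < 2; i++ {
		ev := <-w1.ResultChan()
		if obj, aerr := meta.Accessor(ev.Object); aerr == nil {
			lastRV = obj.GetResourceVersion()
		}
	}
	w1.Stop()

	// Second watch: resume from the last observed resourceVersion; changes
	// made while the first watch was closed are replayed to this watcher.
	opts.ResourceVersion = lastRV
	w2, err := cms.Watch(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	for ev := range w2.ResultChan() { // runs until the server closes the watch
		fmt.Printf("got %s event\n", ev.Type)
	}
}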
Jul 20 13:51:56.681: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jul 20 13:51:57.169: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Jul 20 13:51:57.169: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 13:51:57.169: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Jul 20 13:51:57.169: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 13:51:57.169: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jul 20 13:51:57.182: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 20 13:51:57.182: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 13:51:57.182: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 20 13:51:57.182: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-dd536ffd-b193-4ead-b7be-f28fc40ac769 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-dd536ffd-b193-4ead-b7be-f28fc40ac769 off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-dd536ffd-b193-4ead-b7be-f28fc40ac769 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:57:14.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1583" for this suite. 
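The conflict validated above is between two pods requesting the same hostPort and protocol on the same node, one binding hostIP 0.0.0.0 and the other 127.0.0.1. A minimal sketch of that pod shape; hostPort 54322 and the two hostIPs come from the run above, while the container name, image, containerPort, and the node-selector label are illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod sketches the pod shape this predicate test schedules twice:
// same hostPort and protocol, different hostIP.
func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// The test first applies a random label to one node and targets it
			// from both pods; the key/value here are illustrative stand-ins.
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "95"},
			Containers: []corev1.Container{{
				Name:  "port-holder",
				Image: "k8s.gcr.io/pause",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,  // same hostPort and protocol in both pods
					HostIP:        hostIP, // "0.0.0.0" for pod4, "127.0.0.1" for pod5
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	_ = hostPortPod("pod4", "0.0.0.0")   // expected to schedule
	_ = hostPortPod("pod5", "127.0.0.1") // expected to stay Pending: host port conflict on the node
}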
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:320.774 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":61,"skipped":1041,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:57:15.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:57:16.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6518" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":62,"skipped":1078,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:57:16.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 20 13:57:16.603: INFO: Waiting up to 5m0s for pod "pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb" in namespace "emptydir-4645" to be "Succeeded or Failed" Jul 20 13:57:16.647: INFO: Pod "pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb": Phase="Pending", Reason="", readiness=false. Elapsed: 44.167934ms Jul 20 13:57:18.653: INFO: Pod "pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05010287s Jul 20 13:57:20.887: INFO: Pod "pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284215834s Jul 20 13:57:22.905: INFO: Pod "pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.301624027s Jul 20 13:57:24.907: INFO: Pod "pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.303967171s STEP: Saw pod success Jul 20 13:57:24.907: INFO: Pod "pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb" satisfied condition "Succeeded or Failed" Jul 20 13:57:24.909: INFO: Trying to get logs from node kali-worker pod pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb container test-container: STEP: delete the pod Jul 20 13:57:25.123: INFO: Waiting for pod pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb to disappear Jul 20 13:57:25.163: INFO: Pod pod-0347f3a4-5fab-46bb-8149-4e55a0dc2beb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:57:25.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4645" for this suite. • [SLOW TEST:9.346 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1081,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:57:25.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jul 20 13:57:25.745: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 13:57:25.798: INFO: Waiting for terminating namespaces to be deleted... 
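The emptyDir case that just passed builds its pod around a volume on the node's default storage medium and checks the 0777 permissions from inside the container. A minimal sketch of such a pod, with the pod name, image, and the permission-checking command as assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod sketches a pod that mounts an emptyDir on the node's default
// medium and inspects the mount's mode (expected 0777) from the container.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /mnt/volume && ls -ld /mnt/volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Empty Medium means the node's default storage medium,
					// matching the "(root,0777,default)" case name.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
}

func main() { _ = emptyDirPod() }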
Jul 20 13:57:25.800: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jul 20 13:57:25.805: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Jul 20 13:57:25.805: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 13:57:25.805: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Jul 20 13:57:25.805: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 13:57:25.805: INFO: rally-a049518e-jvz6fpaa from c-rally-a049518e-bjhe3yj2 started at 2020-07-20 13:57:03 +0000 UTC (1 container statuses recorded) Jul 20 13:57:25.805: INFO: Container rally-a049518e-jvz6fpaa ready: true, restart count 0 Jul 20 13:57:25.805: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jul 20 13:57:25.820: INFO: pod4 from sched-pred-1583 started at 2020-07-20 13:52:06 +0000 UTC (1 container statuses recorded) Jul 20 13:57:25.820: INFO: Container pod4 ready: false, restart count 0 Jul 20 13:57:25.820: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 20 13:57:25.820: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 13:57:25.820: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 20 13:57:25.820: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16237a8e9dd17e7a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 13:57:26.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7714" for this suite. 
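The unschedulable pod behind the FailedScheduling event above carries a nodeSelector that no node satisfies. A minimal sketch of that pod; the pod name and the expected event message match the run above, while the label key/value and image are illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod sketches the pod whose nonempty nodeSelector matches no node,
// producing the event "0/3 nodes are available: 3 node(s) didn't match node
// selector." instead of a scheduling decision.
func restrictedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty-value"}, // no node carries this label
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause",
			}},
		},
	}
}

func main() { _ = restrictedPod() }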
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":64,"skipped":1097,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 13:57:26.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-992 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-992 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-992 Jul 20 13:57:27.414: INFO: Found 0 stateful pods, waiting for 1 Jul 20 13:57:37.522: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 20 13:57:37.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 13:57:49.014: INFO: stderr: "I0720 13:57:48.822119 156 log.go:172] (0xc00003a4d0) (0xc00055d5e0) Create stream\nI0720 13:57:48.822162 156 log.go:172] (0xc00003a4d0) (0xc00055d5e0) Stream added, broadcasting: 1\nI0720 13:57:48.826236 156 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0720 13:57:48.826273 156 log.go:172] (0xc00003a4d0) (0xc00055d680) Create stream\nI0720 13:57:48.826282 156 log.go:172] (0xc00003a4d0) (0xc00055d680) Stream added, broadcasting: 3\nI0720 13:57:48.827227 156 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0720 13:57:48.827256 156 log.go:172] (0xc00003a4d0) (0xc000c4a000) Create stream\nI0720 13:57:48.827271 156 log.go:172] (0xc00003a4d0) (0xc000c4a000) Stream added, broadcasting: 5\nI0720 13:57:48.827967 156 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0720 13:57:48.895259 156 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0720 13:57:48.895292 156 log.go:172] (0xc000c4a000) (5) Data frame handling\nI0720 13:57:48.895336 156 log.go:172] (0xc000c4a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 13:57:49.004464 156 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0720 13:57:49.004677 156 log.go:172] (0xc00055d680) (3) Data frame handling\nI0720 13:57:49.004844 156 log.go:172] (0xc00055d680) (3) Data frame sent\nI0720 
13:57:49.004875 156 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0720 13:57:49.004904 156 log.go:172] (0xc00055d680) (3) Data frame handling\nI0720 13:57:49.005404 156 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0720 13:57:49.005485 156 log.go:172] (0xc000c4a000) (5) Data frame handling\nI0720 13:57:49.007900 156 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0720 13:57:49.007926 156 log.go:172] (0xc00055d5e0) (1) Data frame handling\nI0720 13:57:49.007941 156 log.go:172] (0xc00055d5e0) (1) Data frame sent\nI0720 13:57:49.007953 156 log.go:172] (0xc00003a4d0) (0xc00055d5e0) Stream removed, broadcasting: 1\nI0720 13:57:49.007961 156 log.go:172] (0xc00003a4d0) Go away received\nI0720 13:57:49.008495 156 log.go:172] (0xc00003a4d0) (0xc00055d5e0) Stream removed, broadcasting: 1\nI0720 13:57:49.008518 156 log.go:172] (0xc00003a4d0) (0xc00055d680) Stream removed, broadcasting: 3\nI0720 13:57:49.008530 156 log.go:172] (0xc00003a4d0) (0xc000c4a000) Stream removed, broadcasting: 5\n" Jul 20 13:57:49.014: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 13:57:49.014: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 13:57:49.017: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 20 13:57:59.022: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 20 13:57:59.022: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 13:57:59.062: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:57:59.062: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:57:59.062: INFO: Jul 20 13:57:59.062: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 20 13:58:00.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.969557211s Jul 20 13:58:01.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.956854909s Jul 20 13:58:02.373: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.747496254s Jul 20 13:58:03.764: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.658377589s Jul 20 13:58:04.769: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.267881008s Jul 20 13:58:05.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.262275868s Jul 20 13:58:06.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.190196009s Jul 20 13:58:08.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.185350408s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-992 Jul 20 13:58:09.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:58:09.819: INFO: stderr: "I0720 13:58:09.739912 184 log.go:172] (0xc000976f20) (0xc000962640) Create stream\nI0720 13:58:09.739960 184 log.go:172] 
(0xc000976f20) (0xc000962640) Stream added, broadcasting: 1\nI0720 13:58:09.742103 184 log.go:172] (0xc000976f20) Reply frame received for 1\nI0720 13:58:09.742137 184 log.go:172] (0xc000976f20) (0xc0009b2000) Create stream\nI0720 13:58:09.742145 184 log.go:172] (0xc000976f20) (0xc0009b2000) Stream added, broadcasting: 3\nI0720 13:58:09.742861 184 log.go:172] (0xc000976f20) Reply frame received for 3\nI0720 13:58:09.742884 184 log.go:172] (0xc000976f20) (0xc0009b20a0) Create stream\nI0720 13:58:09.742891 184 log.go:172] (0xc000976f20) (0xc0009b20a0) Stream added, broadcasting: 5\nI0720 13:58:09.743556 184 log.go:172] (0xc000976f20) Reply frame received for 5\nI0720 13:58:09.810018 184 log.go:172] (0xc000976f20) Data frame received for 3\nI0720 13:58:09.810051 184 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0720 13:58:09.810060 184 log.go:172] (0xc0009b2000) (3) Data frame sent\nI0720 13:58:09.810065 184 log.go:172] (0xc000976f20) Data frame received for 3\nI0720 13:58:09.810070 184 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0720 13:58:09.810095 184 log.go:172] (0xc000976f20) Data frame received for 5\nI0720 13:58:09.810103 184 log.go:172] (0xc0009b20a0) (5) Data frame handling\nI0720 13:58:09.810112 184 log.go:172] (0xc0009b20a0) (5) Data frame sent\nI0720 13:58:09.810120 184 log.go:172] (0xc000976f20) Data frame received for 5\nI0720 13:58:09.810132 184 log.go:172] (0xc0009b20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 13:58:09.811086 184 log.go:172] (0xc000976f20) Data frame received for 1\nI0720 13:58:09.811107 184 log.go:172] (0xc000962640) (1) Data frame handling\nI0720 13:58:09.811132 184 log.go:172] (0xc000962640) (1) Data frame sent\nI0720 13:58:09.811299 184 log.go:172] (0xc000976f20) (0xc000962640) Stream removed, broadcasting: 1\nI0720 13:58:09.811370 184 log.go:172] (0xc000976f20) Go away received\nI0720 13:58:09.811593 184 log.go:172] (0xc000976f20) (0xc000962640) Stream removed, broadcasting: 1\nI0720 13:58:09.811613 184 log.go:172] (0xc000976f20) (0xc0009b2000) Stream removed, broadcasting: 3\nI0720 13:58:09.811622 184 log.go:172] (0xc000976f20) (0xc0009b20a0) Stream removed, broadcasting: 5\n" Jul 20 13:58:09.820: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 13:58:09.820: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 13:58:09.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:58:10.004: INFO: stderr: "I0720 13:58:09.940440 201 log.go:172] (0xc00057e2c0) (0xc000930320) Create stream\nI0720 13:58:09.940484 201 log.go:172] (0xc00057e2c0) (0xc000930320) Stream added, broadcasting: 1\nI0720 13:58:09.942146 201 log.go:172] (0xc00057e2c0) Reply frame received for 1\nI0720 13:58:09.942170 201 log.go:172] (0xc00057e2c0) (0xc000296b40) Create stream\nI0720 13:58:09.942178 201 log.go:172] (0xc00057e2c0) (0xc000296b40) Stream added, broadcasting: 3\nI0720 13:58:09.942824 201 log.go:172] (0xc00057e2c0) Reply frame received for 3\nI0720 13:58:09.942841 201 log.go:172] (0xc00057e2c0) (0xc000930500) Create stream\nI0720 13:58:09.942847 201 log.go:172] (0xc00057e2c0) (0xc000930500) Stream added, broadcasting: 5\nI0720 13:58:09.943627 201 log.go:172] (0xc00057e2c0) Reply frame received for 5\nI0720 
13:58:09.995822 201 log.go:172] (0xc00057e2c0) Data frame received for 3\nI0720 13:58:09.995852 201 log.go:172] (0xc000296b40) (3) Data frame handling\nI0720 13:58:09.995861 201 log.go:172] (0xc000296b40) (3) Data frame sent\nI0720 13:58:09.995869 201 log.go:172] (0xc00057e2c0) Data frame received for 3\nI0720 13:58:09.995902 201 log.go:172] (0xc00057e2c0) Data frame received for 5\nI0720 13:58:09.995938 201 log.go:172] (0xc000930500) (5) Data frame handling\nI0720 13:58:09.995952 201 log.go:172] (0xc000930500) (5) Data frame sent\nI0720 13:58:09.995962 201 log.go:172] (0xc00057e2c0) Data frame received for 5\nI0720 13:58:09.995973 201 log.go:172] (0xc000930500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0720 13:58:09.996011 201 log.go:172] (0xc000296b40) (3) Data frame handling\nI0720 13:58:10.000033 201 log.go:172] (0xc00057e2c0) Data frame received for 1\nI0720 13:58:10.000060 201 log.go:172] (0xc000930320) (1) Data frame handling\nI0720 13:58:10.000074 201 log.go:172] (0xc000930320) (1) Data frame sent\nI0720 13:58:10.000093 201 log.go:172] (0xc00057e2c0) (0xc000930320) Stream removed, broadcasting: 1\nI0720 13:58:10.000113 201 log.go:172] (0xc00057e2c0) Go away received\nI0720 13:58:10.000341 201 log.go:172] (0xc00057e2c0) (0xc000930320) Stream removed, broadcasting: 1\nI0720 13:58:10.000357 201 log.go:172] (0xc00057e2c0) (0xc000296b40) Stream removed, broadcasting: 3\nI0720 13:58:10.000363 201 log.go:172] (0xc00057e2c0) (0xc000930500) Stream removed, broadcasting: 5\n" Jul 20 13:58:10.004: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 13:58:10.004: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 13:58:10.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:58:10.210: INFO: stderr: "I0720 13:58:10.126946 222 log.go:172] (0xc000802420) (0xc0008b65a0) Create stream\nI0720 13:58:10.126994 222 log.go:172] (0xc000802420) (0xc0008b65a0) Stream added, broadcasting: 1\nI0720 13:58:10.130193 222 log.go:172] (0xc000802420) Reply frame received for 1\nI0720 13:58:10.130216 222 log.go:172] (0xc000802420) (0xc0005f9220) Create stream\nI0720 13:58:10.130223 222 log.go:172] (0xc000802420) (0xc0005f9220) Stream added, broadcasting: 3\nI0720 13:58:10.130794 222 log.go:172] (0xc000802420) Reply frame received for 3\nI0720 13:58:10.130821 222 log.go:172] (0xc000802420) (0xc0003eea00) Create stream\nI0720 13:58:10.130830 222 log.go:172] (0xc000802420) (0xc0003eea00) Stream added, broadcasting: 5\nI0720 13:58:10.131528 222 log.go:172] (0xc000802420) Reply frame received for 5\nI0720 13:58:10.203288 222 log.go:172] (0xc000802420) Data frame received for 3\nI0720 13:58:10.203322 222 log.go:172] (0xc0005f9220) (3) Data frame handling\nI0720 13:58:10.203346 222 log.go:172] (0xc0005f9220) (3) Data frame sent\nI0720 13:58:10.203361 222 log.go:172] (0xc000802420) Data frame received for 3\nI0720 13:58:10.203372 222 log.go:172] (0xc0005f9220) (3) Data frame handling\nI0720 13:58:10.203409 222 log.go:172] (0xc000802420) Data frame received for 5\nI0720 13:58:10.203425 222 log.go:172] (0xc0003eea00) (5) Data frame handling\nI0720 13:58:10.203435 222 log.go:172] (0xc0003eea00) (5) Data frame 
sent\nI0720 13:58:10.203441 222 log.go:172] (0xc000802420) Data frame received for 5\nI0720 13:58:10.203447 222 log.go:172] (0xc0003eea00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0720 13:58:10.204911 222 log.go:172] (0xc000802420) Data frame received for 1\nI0720 13:58:10.204938 222 log.go:172] (0xc0008b65a0) (1) Data frame handling\nI0720 13:58:10.204958 222 log.go:172] (0xc0008b65a0) (1) Data frame sent\nI0720 13:58:10.204974 222 log.go:172] (0xc000802420) (0xc0008b65a0) Stream removed, broadcasting: 1\nI0720 13:58:10.204992 222 log.go:172] (0xc000802420) Go away received\nI0720 13:58:10.205292 222 log.go:172] (0xc000802420) (0xc0008b65a0) Stream removed, broadcasting: 1\nI0720 13:58:10.205311 222 log.go:172] (0xc000802420) (0xc0005f9220) Stream removed, broadcasting: 3\nI0720 13:58:10.205319 222 log.go:172] (0xc000802420) (0xc0003eea00) Stream removed, broadcasting: 5\n" Jul 20 13:58:10.210: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 13:58:10.210: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 13:58:10.213: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jul 20 13:58:20.248: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 13:58:20.248: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 13:58:20.248: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 20 13:58:20.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 13:58:20.964: INFO: stderr: "I0720 13:58:20.541625 242 log.go:172] (0xc0008c3130) (0xc0009be460) Create stream\nI0720 13:58:20.541677 242 log.go:172] (0xc0008c3130) (0xc0009be460) Stream added, broadcasting: 1\nI0720 13:58:20.545978 242 log.go:172] (0xc0008c3130) Reply frame received for 1\nI0720 13:58:20.546043 242 log.go:172] (0xc0008c3130) (0xc0002415e0) Create stream\nI0720 13:58:20.546063 242 log.go:172] (0xc0008c3130) (0xc0002415e0) Stream added, broadcasting: 3\nI0720 13:58:20.547185 242 log.go:172] (0xc0008c3130) Reply frame received for 3\nI0720 13:58:20.547207 242 log.go:172] (0xc0008c3130) (0xc0009d4000) Create stream\nI0720 13:58:20.547213 242 log.go:172] (0xc0008c3130) (0xc0009d4000) Stream added, broadcasting: 5\nI0720 13:58:20.547963 242 log.go:172] (0xc0008c3130) Reply frame received for 5\nI0720 13:58:20.599434 242 log.go:172] (0xc0008c3130) Data frame received for 5\nI0720 13:58:20.599459 242 log.go:172] (0xc0009d4000) (5) Data frame handling\nI0720 13:58:20.599477 242 log.go:172] (0xc0009d4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 13:58:20.954839 242 log.go:172] (0xc0008c3130) Data frame received for 3\nI0720 13:58:20.954889 242 log.go:172] (0xc0002415e0) (3) Data frame handling\nI0720 13:58:20.954907 242 log.go:172] (0xc0002415e0) (3) Data frame sent\nI0720 13:58:20.954931 242 log.go:172] (0xc0008c3130) Data frame received for 3\nI0720 13:58:20.954951 242 log.go:172] (0xc0002415e0) (3) Data frame handling\nI0720 13:58:20.956067 242 log.go:172] (0xc0008c3130) Data 
frame received for 5\nI0720 13:58:20.956092 242 log.go:172] (0xc0009d4000) (5) Data frame handling\nI0720 13:58:20.958714 242 log.go:172] (0xc0008c3130) Data frame received for 1\nI0720 13:58:20.958757 242 log.go:172] (0xc0009be460) (1) Data frame handling\nI0720 13:58:20.958801 242 log.go:172] (0xc0009be460) (1) Data frame sent\nI0720 13:58:20.958943 242 log.go:172] (0xc0008c3130) (0xc0009be460) Stream removed, broadcasting: 1\nI0720 13:58:20.958980 242 log.go:172] (0xc0008c3130) Go away received\nI0720 13:58:20.959641 242 log.go:172] (0xc0008c3130) (0xc0009be460) Stream removed, broadcasting: 1\nI0720 13:58:20.959682 242 log.go:172] (0xc0008c3130) (0xc0002415e0) Stream removed, broadcasting: 3\nI0720 13:58:20.959706 242 log.go:172] (0xc0008c3130) (0xc0009d4000) Stream removed, broadcasting: 5\n" Jul 20 13:58:20.965: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 13:58:20.965: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 13:58:20.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 13:58:21.909: INFO: stderr: "I0720 13:58:21.463193 262 log.go:172] (0xc000916840) (0xc000663540) Create stream\nI0720 13:58:21.463248 262 log.go:172] (0xc000916840) (0xc000663540) Stream added, broadcasting: 1\nI0720 13:58:21.467682 262 log.go:172] (0xc000916840) Reply frame received for 1\nI0720 13:58:21.467726 262 log.go:172] (0xc000916840) (0xc000a02000) Create stream\nI0720 13:58:21.467735 262 log.go:172] (0xc000916840) (0xc000a02000) Stream added, broadcasting: 3\nI0720 13:58:21.468626 262 log.go:172] (0xc000916840) Reply frame received for 3\nI0720 13:58:21.468663 262 log.go:172] (0xc000916840) (0xc000a020a0) Create stream\nI0720 13:58:21.468676 262 log.go:172] (0xc000916840) (0xc000a020a0) Stream added, broadcasting: 5\nI0720 13:58:21.469641 262 log.go:172] (0xc000916840) Reply frame received for 5\nI0720 13:58:21.519522 262 log.go:172] (0xc000916840) Data frame received for 5\nI0720 13:58:21.519560 262 log.go:172] (0xc000a020a0) (5) Data frame handling\nI0720 13:58:21.519585 262 log.go:172] (0xc000a020a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 13:58:21.902961 262 log.go:172] (0xc000916840) Data frame received for 3\nI0720 13:58:21.902991 262 log.go:172] (0xc000a02000) (3) Data frame handling\nI0720 13:58:21.903004 262 log.go:172] (0xc000a02000) (3) Data frame sent\nI0720 13:58:21.903013 262 log.go:172] (0xc000916840) Data frame received for 3\nI0720 13:58:21.903021 262 log.go:172] (0xc000a02000) (3) Data frame handling\nI0720 13:58:21.903095 262 log.go:172] (0xc000916840) Data frame received for 5\nI0720 13:58:21.903136 262 log.go:172] (0xc000a020a0) (5) Data frame handling\nI0720 13:58:21.904495 262 log.go:172] (0xc000916840) Data frame received for 1\nI0720 13:58:21.904506 262 log.go:172] (0xc000663540) (1) Data frame handling\nI0720 13:58:21.904513 262 log.go:172] (0xc000663540) (1) Data frame sent\nI0720 13:58:21.904523 262 log.go:172] (0xc000916840) (0xc000663540) Stream removed, broadcasting: 1\nI0720 13:58:21.904574 262 log.go:172] (0xc000916840) Go away received\nI0720 13:58:21.904923 262 log.go:172] (0xc000916840) (0xc000663540) Stream removed, broadcasting: 1\nI0720 13:58:21.904937 262 log.go:172] (0xc000916840) (0xc000a02000) Stream 
removed, broadcasting: 3\nI0720 13:58:21.904943 262 log.go:172] (0xc000916840) (0xc000a020a0) Stream removed, broadcasting: 5\n" Jul 20 13:58:21.909: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 13:58:21.909: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 13:58:21.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 13:58:22.326: INFO: stderr: "I0720 13:58:22.034604 283 log.go:172] (0xc000b06a50) (0xc0006f3860) Create stream\nI0720 13:58:22.034724 283 log.go:172] (0xc000b06a50) (0xc0006f3860) Stream added, broadcasting: 1\nI0720 13:58:22.036980 283 log.go:172] (0xc000b06a50) Reply frame received for 1\nI0720 13:58:22.037032 283 log.go:172] (0xc000b06a50) (0xc000661680) Create stream\nI0720 13:58:22.037066 283 log.go:172] (0xc000b06a50) (0xc000661680) Stream added, broadcasting: 3\nI0720 13:58:22.037781 283 log.go:172] (0xc000b06a50) Reply frame received for 3\nI0720 13:58:22.037820 283 log.go:172] (0xc000b06a50) (0xc000508aa0) Create stream\nI0720 13:58:22.037838 283 log.go:172] (0xc000b06a50) (0xc000508aa0) Stream added, broadcasting: 5\nI0720 13:58:22.038659 283 log.go:172] (0xc000b06a50) Reply frame received for 5\nI0720 13:58:22.104981 283 log.go:172] (0xc000b06a50) Data frame received for 5\nI0720 13:58:22.105008 283 log.go:172] (0xc000508aa0) (5) Data frame handling\nI0720 13:58:22.105049 283 log.go:172] (0xc000508aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 13:58:22.317876 283 log.go:172] (0xc000b06a50) Data frame received for 3\nI0720 13:58:22.317933 283 log.go:172] (0xc000661680) (3) Data frame handling\nI0720 13:58:22.317974 283 log.go:172] (0xc000661680) (3) Data frame sent\nI0720 13:58:22.317995 283 log.go:172] (0xc000b06a50) Data frame received for 3\nI0720 13:58:22.318023 283 log.go:172] (0xc000661680) (3) Data frame handling\nI0720 13:58:22.318531 283 log.go:172] (0xc000b06a50) Data frame received for 5\nI0720 13:58:22.318573 283 log.go:172] (0xc000508aa0) (5) Data frame handling\nI0720 13:58:22.320219 283 log.go:172] (0xc000b06a50) Data frame received for 1\nI0720 13:58:22.320233 283 log.go:172] (0xc0006f3860) (1) Data frame handling\nI0720 13:58:22.320239 283 log.go:172] (0xc0006f3860) (1) Data frame sent\nI0720 13:58:22.320608 283 log.go:172] (0xc000b06a50) (0xc0006f3860) Stream removed, broadcasting: 1\nI0720 13:58:22.320791 283 log.go:172] (0xc000b06a50) Go away received\nI0720 13:58:22.321304 283 log.go:172] (0xc000b06a50) (0xc0006f3860) Stream removed, broadcasting: 1\nI0720 13:58:22.321414 283 log.go:172] (0xc000b06a50) (0xc000661680) Stream removed, broadcasting: 3\nI0720 13:58:22.321448 283 log.go:172] (0xc000b06a50) (0xc000508aa0) Stream removed, broadcasting: 5\n" Jul 20 13:58:22.326: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 13:58:22.326: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 13:58:22.326: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 13:58:22.457: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 20 13:58:32.464: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 
20 13:58:32.464: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 20 13:58:32.464: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 20 13:58:32.488: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:32.488: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:32.488: INFO: ss-1 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:32.488: INFO: ss-2 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:32.488: INFO: Jul 20 13:58:32.488: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 13:58:33.889: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:33.889: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:33.889: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:33.889: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:33.889: INFO: Jul 20 13:58:33.889: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 
13:58:35.122: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:35.122: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:35.122: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:35.122: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:35.122: INFO: Jul 20 13:58:35.122: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 13:58:36.364: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:36.364: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:36.364: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:36.364: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:36.364: INFO: Jul 20 13:58:36.364: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 13:58:37.956: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:37.956: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:37.956: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:37.956: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:37.956: INFO: Jul 20 13:58:37.956: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 13:58:39.009: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:39.009: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:39.009: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:39.009: INFO: Jul 20 13:58:39.009: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 20 13:58:40.075: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:40.075: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:40.075: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:40.075: INFO: Jul 20 13:58:40.075: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 20 13:58:41.080: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:41.080: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:41.080: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:41.080: INFO: Jul 20 13:58:41.080: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 20 13:58:42.085: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 13:58:42.085: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:27 +0000 UTC }] Jul 20 13:58:42.085: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:58:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 13:57:59 +0000 UTC }] Jul 20 13:58:42.085: INFO: Jul 20 13:58:42.085: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-992 Jul 20 13:58:43.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:58:43.217: INFO: rc: 1 Jul 20 13:58:43.217: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jul 20 13:58:53.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jul 20 13:58:53.987: INFO: rc: 1 Jul 20 13:58:53.987: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 13:59:03.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:59:04.133: INFO: rc: 1 Jul 20 13:59:04.133: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 13:59:14.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:59:14.438: INFO: rc: 1 Jul 20 13:59:14.438: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 13:59:24.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:59:24.530: INFO: rc: 1 Jul 20 13:59:24.530: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 13:59:34.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:59:34.626: INFO: rc: 1 Jul 20 13:59:34.627: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 13:59:44.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:59:44.886: INFO: rc: 1 Jul 20 13:59:44.886: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 
-- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 13:59:54.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 13:59:55.121: INFO: rc: 1 Jul 20 13:59:55.121: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:00:05.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:00:05.222: INFO: rc: 1 Jul 20 14:00:05.222: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:00:15.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:00:15.317: INFO: rc: 1 Jul 20 14:00:15.317: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:00:25.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:00:25.429: INFO: rc: 1 Jul 20 14:00:25.429: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:00:35.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:00:35.540: INFO: rc: 1 Jul 20 14:00:35.540: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:00:45.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:00:45.652: INFO: rc: 1 Jul 20 14:00:45.652: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:00:55.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:00:55.751: INFO: rc: 1 Jul 20 14:00:55.751: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:01:05.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:01:05.873: INFO: rc: 1 Jul 20 14:01:05.873: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:01:15.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:01:15.965: INFO: rc: 1 Jul 20 14:01:15.966: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:01:25.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:01:26.073: INFO: rc: 1 Jul 20 14:01:26.073: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:01:36.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:01:36.169: INFO: rc: 1 Jul 20 14:01:36.169: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:01:46.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:01:46.299: INFO: rc: 1 Jul 20 14:01:46.299: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:01:56.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:01:56.395: INFO: rc: 1 Jul 20 14:01:56.395: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:02:06.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:02:06.495: INFO: rc: 1 Jul 20 14:02:06.495: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:02:16.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:02:17.237: INFO: rc: 1 Jul 20 14:02:17.237: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:02:27.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:02:27.339: INFO: rc: 1 Jul 20 14:02:27.339: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found 
error: exit status 1 Jul 20 14:02:37.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:02:37.558: INFO: rc: 1 Jul 20 14:02:37.558: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:02:47.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:02:47.648: INFO: rc: 1 Jul 20 14:02:47.648: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:02:57.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:02:57.741: INFO: rc: 1 Jul 20 14:02:57.741: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:03:07.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:03:07.832: INFO: rc: 1 Jul 20 14:03:07.832: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:03:17.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:03:17.977: INFO: rc: 1 Jul 20 14:03:17.977: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:03:27.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 
14:03:28.364: INFO: rc: 1 Jul 20 14:03:28.364: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:03:38.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:03:38.761: INFO: rc: 1 Jul 20 14:03:38.761: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 14:03:48.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 14:03:48.851: INFO: rc: 1 Jul 20 14:03:48.851: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jul 20 14:03:48.851: INFO: Scaling statefulset ss to 0 Jul 20 14:03:48.859: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Jul 20 14:03:48.861: INFO: Deleting all statefulset in ns statefulset-992 Jul 20 14:03:48.863: INFO: Scaling statefulset ss to 0 Jul 20 14:03:48.872: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 14:03:48.874: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:03:48.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-992" for this suite. 
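The block above shows the suite retrying the same kubectl exec every 10 seconds until pod ss-0 exists again and the mv succeeds. Below is a minimal shell sketch of that retry pattern, reusing the exact command printed in the log; it is an illustration only, not the framework's Go RunHostCmd loop, and the 30-attempt budget is an assumption.

# Retry the exec every 10s, up to roughly 5 minutes. kubectl exits non-zero while the
# pod is missing ("pods \"ss-0\" not found"), so the loop keeps waiting.
for attempt in $(seq 1 30); do
  if kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config \
       exec --namespace=statefulset-992 ss-0 -- \
       /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'; then
    break
  fi
  echo "exec failed, retrying in 10s..."
  sleep 10
done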
• [SLOW TEST:381.982 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":65,"skipped":1098,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:03:48.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 20 14:04:12.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 14:04:12.910: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 14:04:14.910: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 14:04:14.914: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 14:04:16.910: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 14:04:16.939: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 14:04:18.910: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 14:04:19.021: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 14:04:20.910: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 14:04:20.914: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 14:04:22.910: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 14:04:23.149: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 14:04:24.910: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 14:04:24.957: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:04:24.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6255" for this suite. 
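The lifecycle hook test above first creates a helper pod to receive the HTTP request, then a pod named pod-with-poststart-http-hook whose container declares a postStart httpGet hook aimed at that helper. The sketch below shows the shape of such a manifest; the image, host IP, path, and port are placeholders rather than the values the suite generated.

# Placeholder image/host/path/port; only the lifecycle.postStart.httpGet stanza matters here.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.10      # IP of the pod handling the hook request (placeholder)
          path: /echo?msg=poststart
          port: 8080
EOF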
• [SLOW TEST:36.005 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1099,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:04:24.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Jul 20 14:04:26.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-842' Jul 20 14:04:26.670: INFO: stderr: "" Jul 20 14:04:26.670: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jul 20 14:04:28.502: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:28.502: INFO: Found 0 / 1 Jul 20 14:04:28.951: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:28.951: INFO: Found 0 / 1 Jul 20 14:04:30.203: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:30.203: INFO: Found 0 / 1 Jul 20 14:04:30.753: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:30.753: INFO: Found 0 / 1 Jul 20 14:04:31.975: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:31.975: INFO: Found 0 / 1 Jul 20 14:04:32.766: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:32.766: INFO: Found 0 / 1 Jul 20 14:04:33.904: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:33.904: INFO: Found 0 / 1 Jul 20 14:04:34.772: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:34.772: INFO: Found 1 / 1 Jul 20 14:04:34.772: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jul 20 14:04:34.775: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:34.775: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 20 14:04:34.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config patch pod agnhost-master-8wxcq --namespace=kubectl-842 -p {"metadata":{"annotations":{"x":"y"}}}' Jul 20 14:04:35.416: INFO: stderr: "" Jul 20 14:04:35.416: INFO: stdout: "pod/agnhost-master-8wxcq patched\n" STEP: checking annotations Jul 20 14:04:35.492: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 14:04:35.492: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:04:35.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-842" for this suite. • [SLOW TEST:10.531 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":67,"skipped":1114,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:04:35.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 20 14:04:36.163: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736" in namespace "downward-api-1197" to be "Succeeded or Failed" Jul 20 14:04:36.200: INFO: Pod "downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736": Phase="Pending", Reason="", readiness=false. Elapsed: 36.213756ms Jul 20 14:04:38.203: INFO: Pod "downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040003416s Jul 20 14:04:40.520: INFO: Pod "downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356164782s Jul 20 14:04:42.562: INFO: Pod "downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398799894s Jul 20 14:04:45.155: INFO: Pod "downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736": Phase="Pending", Reason="", readiness=false. Elapsed: 8.991424018s Jul 20 14:04:47.226: INFO: Pod "downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.062855752s STEP: Saw pod success Jul 20 14:04:47.226: INFO: Pod "downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736" satisfied condition "Succeeded or Failed" Jul 20 14:04:47.230: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736 container client-container: STEP: delete the pod Jul 20 14:04:47.440: INFO: Waiting for pod downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736 to disappear Jul 20 14:04:47.929: INFO: Pod downwardapi-volume-afd34fcd-6bf8-41c3-8864-dcfddb68c736 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:04:47.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1197" for this suite. • [SLOW TEST:13.054 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:04:48.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Jul 20 14:04:55.927: INFO: Pod pod-hostip-97555386-15d8-401c-9275-394decd2a3e5 has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:04:55.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7351" for this suite. • [SLOW TEST:7.429 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1168,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:04:55.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:05:15.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4113" for this suite. • [SLOW TEST:19.480 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":70,"skipped":1182,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:05:15.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:05:29.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8996" for this suite. • [SLOW TEST:13.776 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":71,"skipped":1192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:05:29.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 14:05:38.900: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:05:39.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3545" for this suite. 
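The container runtime test above has the container write "OK" to its termination message file and then verifies the kubelet reports that string in the container's terminated state. A hedged way to reproduce the same check by hand is sketched below; the pod and container names are placeholders, not the ones the suite generated.

# Container writes its termination message and exits 0; the policy only falls back to logs on error.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# After the pod reaches Succeeded, read back the reported message (should print OK).
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'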
• [SLOW TEST:9.850 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1255,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:05:39.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-c6c3ecd8-4649-4211-b12e-871feb4a2995 Jul 20 14:05:39.362: INFO: Pod name my-hostname-basic-c6c3ecd8-4649-4211-b12e-871feb4a2995: Found 0 pods out of 1 Jul 20 14:05:44.593: INFO: Pod name my-hostname-basic-c6c3ecd8-4649-4211-b12e-871feb4a2995: Found 1 pods out of 1 Jul 20 14:05:44.594: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c6c3ecd8-4649-4211-b12e-871feb4a2995" are running Jul 20 14:05:44.633: INFO: Pod "my-hostname-basic-c6c3ecd8-4649-4211-b12e-871feb4a2995-f6h7k" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 14:05:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 14:05:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 14:05:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 14:05:39 +0000 UTC Reason: Message:}]) Jul 20 14:05:44.633: INFO: Trying to dial the pod Jul 20 14:05:49.642: INFO: Controller my-hostname-basic-c6c3ecd8-4649-4211-b12e-871feb4a2995: Got expected result from replica 1 [my-hostname-basic-c6c3ecd8-4649-4211-b12e-871feb4a2995-f6h7k]: "my-hostname-basic-c6c3ecd8-4649-4211-b12e-871feb4a2995-f6h7k", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:05:49.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7286" for this 
suite. • [SLOW TEST:10.560 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":73,"skipped":1257,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:05:49.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 20 14:05:50.392: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:50.414: INFO: Number of nodes with available pods: 0 Jul 20 14:05:50.414: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:05:52.231: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:52.235: INFO: Number of nodes with available pods: 0 Jul 20 14:05:52.235: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:05:53.346: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:53.399: INFO: Number of nodes with available pods: 0 Jul 20 14:05:53.399: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:05:53.485: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:53.535: INFO: Number of nodes with available pods: 0 Jul 20 14:05:53.535: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:05:54.618: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:54.622: INFO: Number of nodes with available pods: 0 Jul 20 14:05:54.622: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:05:55.648: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:55.652: INFO: Number of nodes with available pods: 0 Jul 20 14:05:55.652: INFO: Node kali-worker is 
running more than one daemon pod Jul 20 14:05:56.821: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:57.018: INFO: Number of nodes with available pods: 0 Jul 20 14:05:57.018: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:05:57.711: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:57.762: INFO: Number of nodes with available pods: 0 Jul 20 14:05:57.762: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:05:58.533: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:58.536: INFO: Number of nodes with available pods: 0 Jul 20 14:05:58.536: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:05:59.546: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:05:59.549: INFO: Number of nodes with available pods: 1 Jul 20 14:05:59.549: INFO: Node kali-worker2 is running more than one daemon pod Jul 20 14:06:00.594: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:00.597: INFO: Number of nodes with available pods: 2 Jul 20 14:06:00.597: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
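The repeated "can't tolerate node kali-control-plane with taints" lines mean the daemon pods are only counted on the two worker nodes, because the control-plane node carries a node-role.kubernetes.io/master:NoSchedule taint. The commands below show how to inspect that taint directly; the toleration stanza in the comment is illustrative only and is not something this conformance test applies.

# List the taints on the control-plane node named in the log.
kubectl get node kali-control-plane \
  -o jsonpath='{range .spec.taints[*]}{.key}={.value}:{.effect}{"\n"}{end}'

# Toleration a DaemonSet pod template could carry to schedule onto that node as well:
#   tolerations:
#   - key: node-role.kubernetes.io/master
#     operator: Exists
#     effect: NoSchedule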
Jul 20 14:06:01.139: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:01.414: INFO: Number of nodes with available pods: 1 Jul 20 14:06:01.414: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:06:02.617: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:02.620: INFO: Number of nodes with available pods: 1 Jul 20 14:06:02.621: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:06:03.455: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:03.457: INFO: Number of nodes with available pods: 1 Jul 20 14:06:03.457: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:06:04.911: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:06.259: INFO: Number of nodes with available pods: 1 Jul 20 14:06:06.259: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:06:06.475: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:06.478: INFO: Number of nodes with available pods: 1 Jul 20 14:06:06.478: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:06:07.567: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:07.570: INFO: Number of nodes with available pods: 1 Jul 20 14:06:07.570: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:06:08.654: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:08.688: INFO: Number of nodes with available pods: 1 Jul 20 14:06:08.688: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:06:09.461: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:09.464: INFO: Number of nodes with available pods: 1 Jul 20 14:06:09.464: INFO: Node kali-worker is running more than one daemon pod Jul 20 14:06:10.434: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 14:06:10.545: INFO: Number of nodes with available pods: 2 Jul 20 14:06:10.545: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
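At this step the test has set one daemon pod's phase to Failed through the API and waits for the DaemonSet controller to delete it and bring up a replacement, which is what the available-pod counts above track. The same recovery can be watched by hand; the namespace and DaemonSet name come from the log, and -w simply streams pod changes.

# Watch pods in the test namespace get deleted and recreated by the DaemonSet controller.
kubectl get pods --namespace=daemonsets-2757 -o wide -w

# Check that desired/ready counts line up again once the replacement pod is available.
kubectl get daemonset daemon-set --namespace=daemonsets-2757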
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2757, will wait for the garbage collector to delete the pods Jul 20 14:06:10.608: INFO: Deleting DaemonSet.extensions daemon-set took: 6.797739ms Jul 20 14:06:10.708: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.182243ms Jul 20 14:06:23.612: INFO: Number of nodes with available pods: 0 Jul 20 14:06:23.612: INFO: Number of running nodes: 0, number of available pods: 0 Jul 20 14:06:23.614: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2757/daemonsets","resourceVersion":"2731158"},"items":null} Jul 20 14:06:23.617: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2757/pods","resourceVersion":"2731158"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:06:23.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2757" for this suite. • [SLOW TEST:33.984 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":74,"skipped":1264,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:06:23.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-9b21495a-0a1e-48be-b535-f2e7a93a0865 STEP: Creating a pod to test consume configMaps Jul 20 14:06:24.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03" in namespace "configmap-592" to be "Succeeded or Failed" Jul 20 14:06:24.577: INFO: Pod "pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03": Phase="Pending", Reason="", readiness=false. Elapsed: 87.589798ms Jul 20 14:06:26.581: INFO: Pod "pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090726486s Jul 20 14:06:28.585: INFO: Pod "pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095635688s Jul 20 14:06:30.605: INFO: Pod "pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.115034904s Jul 20 14:06:32.933: INFO: Pod "pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.443029635s STEP: Saw pod success Jul 20 14:06:32.933: INFO: Pod "pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03" satisfied condition "Succeeded or Failed" Jul 20 14:06:32.935: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03 container configmap-volume-test: STEP: delete the pod Jul 20 14:06:33.542: INFO: Waiting for pod pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03 to disappear Jul 20 14:06:34.006: INFO: Pod pod-configmaps-e7dcaa99-5d87-4130-bf14-d92e247e5f03 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:06:34.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-592" for this suite. • [SLOW TEST:10.380 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1264,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:06:34.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-1360 STEP: creating replication controller nodeport-test in namespace services-1360 I0720 14:06:35.243120 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1360, replica count: 2 I0720 14:06:38.297067 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 14:06:41.297300 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 14:06:44.297514 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 14:06:47.297796 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 14:06:50.298005 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 14:06:50.298: INFO: Creating new exec pod Jul 20 14:06:59.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-1360 execpodkczkv -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jul 20 14:06:59.716: INFO: stderr: "I0720 14:06:59.547799 963 log.go:172] (0xc000596370) (0xc00040e000) Create stream\nI0720 14:06:59.547863 963 log.go:172] (0xc000596370) (0xc00040e000) Stream added, broadcasting: 1\nI0720 14:06:59.550349 963 log.go:172] (0xc000596370) Reply frame received for 1\nI0720 14:06:59.550407 963 log.go:172] (0xc000596370) (0xc00040e140) Create stream\nI0720 14:06:59.550423 963 log.go:172] (0xc000596370) (0xc00040e140) Stream added, broadcasting: 3\nI0720 14:06:59.551376 963 log.go:172] (0xc000596370) Reply frame received for 3\nI0720 14:06:59.551422 963 log.go:172] (0xc000596370) (0xc00040e1e0) Create stream\nI0720 14:06:59.551435 963 log.go:172] (0xc000596370) (0xc00040e1e0) Stream added, broadcasting: 5\nI0720 14:06:59.552438 963 log.go:172] (0xc000596370) Reply frame received for 5\nI0720 14:06:59.682698 963 log.go:172] (0xc000596370) Data frame received for 5\nI0720 14:06:59.682730 963 log.go:172] (0xc00040e1e0) (5) Data frame handling\nI0720 14:06:59.682750 963 log.go:172] (0xc00040e1e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0720 14:06:59.707454 963 log.go:172] (0xc000596370) Data frame received for 5\nI0720 14:06:59.707492 963 log.go:172] (0xc00040e1e0) (5) Data frame handling\nI0720 14:06:59.707525 963 log.go:172] (0xc00040e1e0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0720 14:06:59.707551 963 log.go:172] (0xc000596370) Data frame received for 3\nI0720 14:06:59.707565 963 log.go:172] (0xc00040e140) (3) Data frame handling\nI0720 14:06:59.707824 963 log.go:172] (0xc000596370) Data frame received for 5\nI0720 14:06:59.707836 963 log.go:172] (0xc00040e1e0) (5) Data frame handling\nI0720 14:06:59.710228 963 log.go:172] (0xc000596370) Data frame received for 1\nI0720 14:06:59.710249 963 log.go:172] (0xc00040e000) (1) Data frame handling\nI0720 14:06:59.710260 963 log.go:172] (0xc00040e000) (1) Data frame sent\nI0720 14:06:59.710272 963 log.go:172] (0xc000596370) (0xc00040e000) Stream removed, broadcasting: 1\nI0720 14:06:59.710286 963 log.go:172] (0xc000596370) Go away received\nI0720 14:06:59.710702 963 log.go:172] (0xc000596370) (0xc00040e000) Stream removed, broadcasting: 1\nI0720 14:06:59.710725 963 log.go:172] (0xc000596370) (0xc00040e140) Stream removed, broadcasting: 3\nI0720 14:06:59.710737 963 log.go:172] (0xc000596370) (0xc00040e1e0) Stream removed, broadcasting: 5\n" Jul 20 14:06:59.716: INFO: stdout: "" Jul 20 14:06:59.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-1360 execpodkczkv -- /bin/sh -x -c nc -zv -t -w 2 10.105.77.191 80' Jul 20 14:07:00.009: INFO: stderr: "I0720 14:06:59.938447 986 log.go:172] (0xc000b16210) (0xc000aec1e0) Create stream\nI0720 14:06:59.938527 986 log.go:172] (0xc000b16210) (0xc000aec1e0) Stream added, broadcasting: 1\nI0720 14:06:59.942412 986 log.go:172] (0xc000b16210) Reply frame received for 1\nI0720 14:06:59.942462 986 log.go:172] (0xc000b16210) (0xc000aa60a0) Create stream\nI0720 14:06:59.942486 986 log.go:172] (0xc000b16210) (0xc000aa60a0) Stream added, broadcasting: 3\nI0720 14:06:59.943616 986 log.go:172] (0xc000b16210) Reply frame 
received for 3\nI0720 14:06:59.943983 986 log.go:172] (0xc000b16210) (0xc000aa40a0) Create stream\nI0720 14:06:59.944019 986 log.go:172] (0xc000b16210) (0xc000aa40a0) Stream added, broadcasting: 5\nI0720 14:06:59.945078 986 log.go:172] (0xc000b16210) Reply frame received for 5\nI0720 14:07:00.000613 986 log.go:172] (0xc000b16210) Data frame received for 3\nI0720 14:07:00.000650 986 log.go:172] (0xc000aa60a0) (3) Data frame handling\nI0720 14:07:00.000704 986 log.go:172] (0xc000b16210) Data frame received for 5\nI0720 14:07:00.000872 986 log.go:172] (0xc000aa40a0) (5) Data frame handling\nI0720 14:07:00.000909 986 log.go:172] (0xc000aa40a0) (5) Data frame sent\nI0720 14:07:00.000937 986 log.go:172] (0xc000b16210) Data frame received for 5\nI0720 14:07:00.000955 986 log.go:172] (0xc000aa40a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.77.191 80\nConnection to 10.105.77.191 80 port [tcp/http] succeeded!\nI0720 14:07:00.002655 986 log.go:172] (0xc000b16210) Data frame received for 1\nI0720 14:07:00.002668 986 log.go:172] (0xc000aec1e0) (1) Data frame handling\nI0720 14:07:00.002683 986 log.go:172] (0xc000aec1e0) (1) Data frame sent\nI0720 14:07:00.002693 986 log.go:172] (0xc000b16210) (0xc000aec1e0) Stream removed, broadcasting: 1\nI0720 14:07:00.002879 986 log.go:172] (0xc000b16210) Go away received\nI0720 14:07:00.003136 986 log.go:172] (0xc000b16210) (0xc000aec1e0) Stream removed, broadcasting: 1\nI0720 14:07:00.003183 986 log.go:172] (0xc000b16210) (0xc000aa60a0) Stream removed, broadcasting: 3\nI0720 14:07:00.003206 986 log.go:172] (0xc000b16210) (0xc000aa40a0) Stream removed, broadcasting: 5\n" Jul 20 14:07:00.009: INFO: stdout: "" Jul 20 14:07:00.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-1360 execpodkczkv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32391' Jul 20 14:07:00.181: INFO: stderr: "I0720 14:07:00.116959 1007 log.go:172] (0xc000ada9a0) (0xc000acc140) Create stream\nI0720 14:07:00.117000 1007 log.go:172] (0xc000ada9a0) (0xc000acc140) Stream added, broadcasting: 1\nI0720 14:07:00.119452 1007 log.go:172] (0xc000ada9a0) Reply frame received for 1\nI0720 14:07:00.119501 1007 log.go:172] (0xc000ada9a0) (0xc0006892c0) Create stream\nI0720 14:07:00.119521 1007 log.go:172] (0xc000ada9a0) (0xc0006892c0) Stream added, broadcasting: 3\nI0720 14:07:00.120357 1007 log.go:172] (0xc000ada9a0) Reply frame received for 3\nI0720 14:07:00.120393 1007 log.go:172] (0xc000ada9a0) (0xc00049e000) Create stream\nI0720 14:07:00.120405 1007 log.go:172] (0xc000ada9a0) (0xc00049e000) Stream added, broadcasting: 5\nI0720 14:07:00.121439 1007 log.go:172] (0xc000ada9a0) Reply frame received for 5\nI0720 14:07:00.174828 1007 log.go:172] (0xc000ada9a0) Data frame received for 5\nI0720 14:07:00.174864 1007 log.go:172] (0xc00049e000) (5) Data frame handling\nI0720 14:07:00.174897 1007 log.go:172] (0xc00049e000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 32391\nConnection to 172.18.0.13 32391 port [tcp/32391] succeeded!\nI0720 14:07:00.175071 1007 log.go:172] (0xc000ada9a0) Data frame received for 3\nI0720 14:07:00.175097 1007 log.go:172] (0xc0006892c0) (3) Data frame handling\nI0720 14:07:00.175123 1007 log.go:172] (0xc000ada9a0) Data frame received for 5\nI0720 14:07:00.175145 1007 log.go:172] (0xc00049e000) (5) Data frame handling\nI0720 14:07:00.176418 1007 log.go:172] (0xc000ada9a0) Data frame received for 1\nI0720 14:07:00.176503 1007 log.go:172] (0xc000acc140) (1) Data frame handling\nI0720 
14:07:00.176531 1007 log.go:172] (0xc000acc140) (1) Data frame sent\nI0720 14:07:00.176545 1007 log.go:172] (0xc000ada9a0) (0xc000acc140) Stream removed, broadcasting: 1\nI0720 14:07:00.176562 1007 log.go:172] (0xc000ada9a0) Go away received\nI0720 14:07:00.177172 1007 log.go:172] (0xc000ada9a0) (0xc000acc140) Stream removed, broadcasting: 1\nI0720 14:07:00.177195 1007 log.go:172] (0xc000ada9a0) (0xc0006892c0) Stream removed, broadcasting: 3\nI0720 14:07:00.177217 1007 log.go:172] (0xc000ada9a0) (0xc00049e000) Stream removed, broadcasting: 5\n" Jul 20 14:07:00.182: INFO: stdout: "" Jul 20 14:07:00.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-1360 execpodkczkv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 32391' Jul 20 14:07:00.382: INFO: stderr: "I0720 14:07:00.314761 1027 log.go:172] (0xc00003a580) (0xc00040caa0) Create stream\nI0720 14:07:00.314851 1027 log.go:172] (0xc00003a580) (0xc00040caa0) Stream added, broadcasting: 1\nI0720 14:07:00.317946 1027 log.go:172] (0xc00003a580) Reply frame received for 1\nI0720 14:07:00.317998 1027 log.go:172] (0xc00003a580) (0xc000904000) Create stream\nI0720 14:07:00.318011 1027 log.go:172] (0xc00003a580) (0xc000904000) Stream added, broadcasting: 3\nI0720 14:07:00.319234 1027 log.go:172] (0xc00003a580) Reply frame received for 3\nI0720 14:07:00.319262 1027 log.go:172] (0xc00003a580) (0xc0009d0000) Create stream\nI0720 14:07:00.319271 1027 log.go:172] (0xc00003a580) (0xc0009d0000) Stream added, broadcasting: 5\nI0720 14:07:00.320386 1027 log.go:172] (0xc00003a580) Reply frame received for 5\nI0720 14:07:00.375752 1027 log.go:172] (0xc00003a580) Data frame received for 5\nI0720 14:07:00.375778 1027 log.go:172] (0xc0009d0000) (5) Data frame handling\nI0720 14:07:00.375794 1027 log.go:172] (0xc0009d0000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 32391\nI0720 14:07:00.376917 1027 log.go:172] (0xc00003a580) Data frame received for 5\nI0720 14:07:00.376944 1027 log.go:172] (0xc0009d0000) (5) Data frame handling\nI0720 14:07:00.376972 1027 log.go:172] (0xc00003a580) Data frame received for 3\nConnection to 172.18.0.15 32391 port [tcp/32391] succeeded!\nI0720 14:07:00.377021 1027 log.go:172] (0xc000904000) (3) Data frame handling\nI0720 14:07:00.377051 1027 log.go:172] (0xc0009d0000) (5) Data frame sent\nI0720 14:07:00.377069 1027 log.go:172] (0xc00003a580) Data frame received for 5\nI0720 14:07:00.377076 1027 log.go:172] (0xc0009d0000) (5) Data frame handling\nI0720 14:07:00.378483 1027 log.go:172] (0xc00003a580) Data frame received for 1\nI0720 14:07:00.378502 1027 log.go:172] (0xc00040caa0) (1) Data frame handling\nI0720 14:07:00.378518 1027 log.go:172] (0xc00040caa0) (1) Data frame sent\nI0720 14:07:00.378609 1027 log.go:172] (0xc00003a580) (0xc00040caa0) Stream removed, broadcasting: 1\nI0720 14:07:00.378630 1027 log.go:172] (0xc00003a580) Go away received\nI0720 14:07:00.378835 1027 log.go:172] (0xc00003a580) (0xc00040caa0) Stream removed, broadcasting: 1\nI0720 14:07:00.378845 1027 log.go:172] (0xc00003a580) (0xc000904000) Stream removed, broadcasting: 3\nI0720 14:07:00.378850 1027 log.go:172] (0xc00003a580) (0xc0009d0000) Stream removed, broadcasting: 5\n" Jul 20 14:07:00.383: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:07:00.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1360" for this 
suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:26.376 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":76,"skipped":1271,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:07:00.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 20 14:07:02.840: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 20 14:07:04.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850824, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850822, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 14:07:07.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850824, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850822, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 14:07:09.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850824, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850822, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 14:07:11.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850824, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850822, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 14:07:12.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850823, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850824, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730850822, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 14:07:16.124: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to 
convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 14:07:16.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:07:18.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9777" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:18.996 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":77,"skipped":1277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:07:19.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Jul 20 14:07:36.336: INFO: Successfully updated pod "annotationupdate1964db44-ebab-4661-879f-6b29be833bd3" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:07:39.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7415" for this suite. 
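For reference, the projected downward API behaviour verified just above (a change to the pod's annotations eventually shows up in the mounted file) can be reproduced with a minimal manifest along the following lines; the pod name, annotation key and image are illustrative, not the suite's fixtures:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo          # hypothetical name
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# after the annotation changes, the kubelet rewrites /etc/podinfo/annotations in the running pod
kubectl annotate pod annotationupdate-demo build="two" --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations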
• [SLOW TEST:19.758 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1300,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:07:39.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 20 14:07:40.619: INFO: Waiting up to 5m0s for pod "pod-98f767cc-3eaa-4703-a3eb-a5000350342b" in namespace "emptydir-5057" to be "Succeeded or Failed" Jul 20 14:07:40.903: INFO: Pod "pod-98f767cc-3eaa-4703-a3eb-a5000350342b": Phase="Pending", Reason="", readiness=false. Elapsed: 284.688179ms Jul 20 14:07:42.908: INFO: Pod "pod-98f767cc-3eaa-4703-a3eb-a5000350342b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289074559s Jul 20 14:07:44.941: INFO: Pod "pod-98f767cc-3eaa-4703-a3eb-a5000350342b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322190615s Jul 20 14:07:46.944: INFO: Pod "pod-98f767cc-3eaa-4703-a3eb-a5000350342b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.325592507s Jul 20 14:07:49.098: INFO: Pod "pod-98f767cc-3eaa-4703-a3eb-a5000350342b": Phase="Running", Reason="", readiness=true. Elapsed: 8.479394208s Jul 20 14:07:51.102: INFO: Pod "pod-98f767cc-3eaa-4703-a3eb-a5000350342b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.483073317s STEP: Saw pod success Jul 20 14:07:51.102: INFO: Pod "pod-98f767cc-3eaa-4703-a3eb-a5000350342b" satisfied condition "Succeeded or Failed" Jul 20 14:07:51.104: INFO: Trying to get logs from node kali-worker2 pod pod-98f767cc-3eaa-4703-a3eb-a5000350342b container test-container: STEP: delete the pod Jul 20 14:07:51.330: INFO: Waiting for pod pod-98f767cc-3eaa-4703-a3eb-a5000350342b to disappear Jul 20 14:07:51.333: INFO: Pod pod-98f767cc-3eaa-4703-a3eb-a5000350342b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:07:51.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5057" for this suite. 
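A rough analogue of the emptyDir case just verified (non-root user, default medium, writable volume directory); the UID, image and pod name are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                  # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # run as a non-root UID
  containers:
  - name: test-container
    image: busybox:1.31
    # the kubelet creates the emptyDir directory with mode 0777, so the non-root user can write to it
    command: ["sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/ok"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                       # default medium: backed by node storage
EOF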
• [SLOW TEST:12.196 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1308,"failed":0} SSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:07:51.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:08:23.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-313" for this suite. • [SLOW TEST:32.320 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":80,"skipped":1311,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:08:23.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jul 20 14:08:24.301: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 14:08:24.399: INFO: Waiting for terminating namespaces to be deleted... 
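The fail-once-local pods logged in the node inventory below come from the Job that just completed above: each pod's container fails on its first attempt and is restarted in place by the kubelet. A Job with the same shape could look like this; the name, image and fail-once trick are illustrative, not the suite's fixture:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local                # hypothetical name
spec:
  completions: 4
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure         # failed containers are restarted inside the same pod
      containers:
      - name: c
        image: busybox:1.31
        # fail the first run, succeed after the local restart; the marker file lives
        # on an emptyDir, which survives container restarts within the same pod
        command: ["sh", "-c", "if [ -f /data/ran ]; then exit 0; else touch /data/ran; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
kubectl get pods -l job-name=fail-once-local    # each pod ends up with restart count 1, as in the log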
Jul 20 14:08:24.539: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jul 20 14:08:24.565: INFO: fail-once-local-vn9fb from job-313 started at 2020-07-20 14:08:08 +0000 UTC (1 container statuses recorded) Jul 20 14:08:24.565: INFO: Container c ready: false, restart count 1 Jul 20 14:08:24.565: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Jul 20 14:08:24.565: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 14:08:24.565: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Jul 20 14:08:24.565: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 14:08:24.565: INFO: fail-once-local-mkh9j from job-313 started at 2020-07-20 14:08:10 +0000 UTC (1 container statuses recorded) Jul 20 14:08:24.565: INFO: Container c ready: false, restart count 1 Jul 20 14:08:24.565: INFO: fail-once-local-dkxcz from job-313 started at 2020-07-20 14:07:51 +0000 UTC (1 container statuses recorded) Jul 20 14:08:24.565: INFO: Container c ready: false, restart count 1 Jul 20 14:08:24.565: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jul 20 14:08:24.571: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 20 14:08:24.571: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 14:08:24.571: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 20 14:08:24.571: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 14:08:24.571: INFO: fail-once-local-c2pkw from job-313 started at 2020-07-20 14:07:51 +0000 UTC (1 container statuses recorded) Jul 20 14:08:24.571: INFO: Container c ready: false, restart count 1 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 Jul 20 14:08:25.110: INFO: Pod kindnet-njbgt requesting resource cpu=100m on Node kali-worker Jul 20 14:08:25.110: INFO: Pod kindnet-pk4xb requesting resource cpu=100m on Node kali-worker2 Jul 20 14:08:25.110: INFO: Pod kube-proxy-qwsfx requesting resource cpu=0m on Node kali-worker Jul 20 14:08:25.110: INFO: Pod kube-proxy-vk6jr requesting resource cpu=0m on Node kali-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jul 20 14:08:25.110: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker Jul 20 14:08:25.116: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-1e8b4b96-820b-47b3-8ebc-e27832f28d73.16237b2816e72ea3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7989/filler-pod-1e8b4b96-820b-47b3-8ebc-e27832f28d73 to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e8b4b96-820b-47b3-8ebc-e27832f28d73.16237b28e22c29f3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e8b4b96-820b-47b3-8ebc-e27832f28d73.16237b2980cccbe6], Reason = [Created], Message = [Created container filler-pod-1e8b4b96-820b-47b3-8ebc-e27832f28d73] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e8b4b96-820b-47b3-8ebc-e27832f28d73.16237b29aa5e10d9], Reason = [Started], Message = [Started container filler-pod-1e8b4b96-820b-47b3-8ebc-e27832f28d73] STEP: Considering event: Type = [Normal], Name = [filler-pod-b4ffb3f8-e4de-4440-aabd-dd70b1331722.16237b281ab78529], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7989/filler-pod-b4ffb3f8-e4de-4440-aabd-dd70b1331722 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-b4ffb3f8-e4de-4440-aabd-dd70b1331722.16237b292882094e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b4ffb3f8-e4de-4440-aabd-dd70b1331722.16237b29ba66833f], Reason = [Created], Message = [Created container filler-pod-b4ffb3f8-e4de-4440-aabd-dd70b1331722] STEP: Considering event: Type = [Normal], Name = [filler-pod-b4ffb3f8-e4de-4440-aabd-dd70b1331722.16237b29d2f05f49], Reason = [Started], Message = [Started container filler-pod-b4ffb3f8-e4de-4440-aabd-dd70b1331722] STEP: Considering event: Type = [Warning], Name = [additional-pod.16237b2a0f017565], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:08:35.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7989" for this suite. 
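The FailedScheduling event above is the expected outcome once the filler pods have claimed most of each node's allocatable CPU: any further pod whose CPU request does not fit on any node stays Pending with that message. A sketch of such a pod (the name is borrowed from the event; the request value and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2        # same image the filler pods use
    resources:
      requests:
        cpu: "1000m"                   # illustrative: anything larger than the CPU left on every node
      limits:
        cpu: "1000m"
EOF
kubectl describe pod additional-pod    # Events: FailedScheduling ... Insufficient cpu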
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:12.640 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":81,"skipped":1322,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:08:36.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 14:08:42.414: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:08:42.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-279" for this suite. 
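The 'Expected: &{DONE} to match ... DONE' check above relies on terminationMessagePolicy FallbackToLogsOnError: when a failed container wrote nothing to its termination-message file, the tail of its log is used as the message instead. A minimal reproduction (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.31
    command: ["sh", "-c", "echo DONE; exit 1"]           # fails, but leaves "DONE" in its log
    terminationMessagePolicy: FallbackToLogsOnError      # fall back to the log tail as the message
EOF
# once the container has terminated, the message comes from the log output
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'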
• [SLOW TEST:6.698 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1341,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:08:43.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-8e0e1b75-036e-442c-a74d-774f16e89c93 STEP: Creating a pod to test consume secrets Jul 20 14:08:45.163: INFO: Waiting up to 5m0s for pod "pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61" in namespace "secrets-8203" to be "Succeeded or Failed" Jul 20 14:08:45.178: INFO: Pod "pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 15.419894ms Jul 20 14:08:47.434: INFO: Pod "pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271617368s Jul 20 14:08:50.127: INFO: Pod "pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.964116234s Jul 20 14:08:52.362: INFO: Pod "pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 7.19926398s Jul 20 14:08:54.584: INFO: Pod "pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 9.420962554s Jul 20 14:08:56.597: INFO: Pod "pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.434469148s STEP: Saw pod success Jul 20 14:08:56.597: INFO: Pod "pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61" satisfied condition "Succeeded or Failed" Jul 20 14:08:56.600: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61 container secret-volume-test: STEP: delete the pod Jul 20 14:08:57.010: INFO: Waiting for pod pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61 to disappear Jul 20 14:08:57.070: INFO: Pod pod-secrets-65aa0221-bb99-4da7-a0ef-a81050af0f61 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:08:57.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8203" for this suite. STEP: Destroying namespace "secret-namespace-1328" for this suite. • [SLOW TEST:14.344 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1353,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:08:57.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-7237 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 20 14:08:58.073: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 20 14:08:58.912: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 14:09:01.020: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 14:09:02.990: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 14:09:05.133: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 14:09:07.314: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 14:09:08.996: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 14:09:11.182: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 14:09:12.990: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 14:09:14.916: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Jul 20 14:09:17.356: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 14:09:19.170: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 20 14:09:19.176: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 20 14:09:21.420: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 20 14:09:23.382: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 20 14:09:25.424: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 20 14:09:33.734: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.138:8080/dial?request=hostname&protocol=udp&host=10.244.2.41&port=8081&tries=1'] Namespace:pod-network-test-7237 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 14:09:33.734: INFO: >>> kubeConfig: /root/.kube/config I0720 14:09:34.084890 7 log.go:172] (0xc002bc9b80) (0xc0020755e0) Create stream I0720 14:09:34.084919 7 log.go:172] (0xc002bc9b80) (0xc0020755e0) Stream added, broadcasting: 1 I0720 14:09:34.089849 7 log.go:172] (0xc002bc9b80) Reply frame received for 1 I0720 14:09:34.089896 7 log.go:172] (0xc002bc9b80) (0xc000e8ebe0) Create stream I0720 14:09:34.089909 7 log.go:172] (0xc002bc9b80) (0xc000e8ebe0) Stream added, broadcasting: 3 I0720 14:09:34.091105 7 log.go:172] (0xc002bc9b80) Reply frame received for 3 I0720 14:09:34.091138 7 log.go:172] (0xc002bc9b80) (0xc0015401e0) Create stream I0720 14:09:34.091150 7 log.go:172] (0xc002bc9b80) (0xc0015401e0) Stream added, broadcasting: 5 I0720 14:09:34.092131 7 log.go:172] (0xc002bc9b80) Reply frame received for 5 I0720 14:09:34.168328 7 log.go:172] (0xc002bc9b80) Data frame received for 3 I0720 14:09:34.168368 7 log.go:172] (0xc000e8ebe0) (3) Data frame handling I0720 14:09:34.168397 7 log.go:172] (0xc000e8ebe0) (3) Data frame sent I0720 14:09:34.169106 7 log.go:172] (0xc002bc9b80) Data frame received for 3 I0720 14:09:34.169143 7 log.go:172] (0xc000e8ebe0) (3) Data frame handling I0720 14:09:34.169386 7 log.go:172] (0xc002bc9b80) Data frame received for 5 I0720 14:09:34.169414 7 log.go:172] (0xc0015401e0) (5) Data frame handling I0720 14:09:34.171566 7 log.go:172] (0xc002bc9b80) Data frame received for 1 I0720 14:09:34.171597 7 log.go:172] (0xc0020755e0) (1) Data frame handling I0720 14:09:34.171618 7 log.go:172] (0xc0020755e0) (1) Data frame sent I0720 14:09:34.171638 7 log.go:172] (0xc002bc9b80) (0xc0020755e0) Stream removed, broadcasting: 1 I0720 14:09:34.171657 7 log.go:172] (0xc002bc9b80) Go away received I0720 14:09:34.171750 7 log.go:172] (0xc002bc9b80) (0xc0020755e0) Stream removed, broadcasting: 1 I0720 14:09:34.171775 7 log.go:172] (0xc002bc9b80) (0xc000e8ebe0) Stream removed, broadcasting: 3 I0720 14:09:34.171791 7 log.go:172] (0xc002bc9b80) (0xc0015401e0) Stream removed, broadcasting: 5 Jul 20 14:09:34.171: INFO: Waiting for responses: map[] Jul 20 14:09:34.331: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.138:8080/dial?request=hostname&protocol=udp&host=10.244.1.137&port=8081&tries=1'] Namespace:pod-network-test-7237 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 14:09:34.331: INFO: >>> kubeConfig: /root/.kube/config I0720 14:09:34.363838 7 log.go:172] (0xc000d32630) (0xc001540640) Create stream I0720 14:09:34.363872 7 log.go:172] (0xc000d32630) (0xc001540640) Stream added, broadcasting: 1 I0720 
14:09:34.366093 7 log.go:172] (0xc000d32630) Reply frame received for 1 I0720 14:09:34.366123 7 log.go:172] (0xc000d32630) (0xc002075680) Create stream I0720 14:09:34.366132 7 log.go:172] (0xc000d32630) (0xc002075680) Stream added, broadcasting: 3 I0720 14:09:34.367051 7 log.go:172] (0xc000d32630) Reply frame received for 3 I0720 14:09:34.367101 7 log.go:172] (0xc000d32630) (0xc002bbe280) Create stream I0720 14:09:34.367130 7 log.go:172] (0xc000d32630) (0xc002bbe280) Stream added, broadcasting: 5 I0720 14:09:34.368077 7 log.go:172] (0xc000d32630) Reply frame received for 5 I0720 14:09:34.439647 7 log.go:172] (0xc000d32630) Data frame received for 3 I0720 14:09:34.439702 7 log.go:172] (0xc002075680) (3) Data frame handling I0720 14:09:34.439731 7 log.go:172] (0xc002075680) (3) Data frame sent I0720 14:09:34.439772 7 log.go:172] (0xc000d32630) Data frame received for 5 I0720 14:09:34.439784 7 log.go:172] (0xc002bbe280) (5) Data frame handling I0720 14:09:34.440379 7 log.go:172] (0xc000d32630) Data frame received for 3 I0720 14:09:34.440402 7 log.go:172] (0xc002075680) (3) Data frame handling I0720 14:09:34.443201 7 log.go:172] (0xc000d32630) Data frame received for 1 I0720 14:09:34.443230 7 log.go:172] (0xc001540640) (1) Data frame handling I0720 14:09:34.443248 7 log.go:172] (0xc001540640) (1) Data frame sent I0720 14:09:34.443319 7 log.go:172] (0xc000d32630) (0xc001540640) Stream removed, broadcasting: 1 I0720 14:09:34.443367 7 log.go:172] (0xc000d32630) Go away received I0720 14:09:34.443578 7 log.go:172] (0xc000d32630) (0xc001540640) Stream removed, broadcasting: 1 I0720 14:09:34.443618 7 log.go:172] (0xc000d32630) (0xc002075680) Stream removed, broadcasting: 3 I0720 14:09:34.443638 7 log.go:172] (0xc000d32630) (0xc002bbe280) Stream removed, broadcasting: 5 Jul 20 14:09:34.443: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:09:34.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7237" for this suite. 
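The two probes above are issued from the client pod against the test's netserver pods: the client's webserver exposes a /dial endpoint that sends a UDP request to the given host and port and reports which hostname answered ('Waiting for responses: map[]' means no expected hostname is still outstanding). The same check can be repeated by hand with kubectl exec; the namespace, pod name and IPs below are the ones from this run and will differ elsewhere:

kubectl exec -n pod-network-test-7237 test-container-pod -- \
  curl -g -q -s 'http://10.244.1.138:8080/dial?request=hostname&protocol=udp&host=10.244.2.41&port=8081&tries=1'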
• [SLOW TEST:37.106 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:09:34.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 20 14:09:35.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791" in namespace "downward-api-4821" to be "Succeeded or Failed" Jul 20 14:09:35.744: INFO: Pod "downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791": Phase="Pending", Reason="", readiness=false. Elapsed: 166.632202ms Jul 20 14:09:37.748: INFO: Pod "downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170821319s Jul 20 14:09:39.912: INFO: Pod "downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335315582s Jul 20 14:09:42.135: INFO: Pod "downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558384791s Jul 20 14:09:44.187: INFO: Pod "downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.610072703s STEP: Saw pod success Jul 20 14:09:44.187: INFO: Pod "downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791" satisfied condition "Succeeded or Failed" Jul 20 14:09:44.190: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791 container client-container: STEP: delete the pod Jul 20 14:09:45.042: INFO: Waiting for pod downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791 to disappear Jul 20 14:09:45.145: INFO: Pod downwardapi-volume-c5e79da4-d51b-4408-8e0f-9ce66ed53791 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:09:45.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4821" for this suite. 
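The downward API volume exercised above exposes the container's own CPU request as a file in the pod. A minimal equivalent (names, image and the 250m request are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]   # prints 250 (the request divided by the 1m divisor)
    resources:
      requests:
        cpu: "250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                  # report the value in millicores
EOF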
• [SLOW TEST:10.880 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1397,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:09:45.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-76d8e70c-230d-4b8f-bd4d-ea1233a15191 STEP: Creating secret with name secret-projected-all-test-volume-fe873bbe-b060-4852-8224-fe79dd36115c STEP: Creating a pod to test Check all projections for projected volume plugin Jul 20 14:09:46.675: INFO: Waiting up to 5m0s for pod "projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8" in namespace "projected-2662" to be "Succeeded or Failed" Jul 20 14:09:46.882: INFO: Pod "projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8": Phase="Pending", Reason="", readiness=false. Elapsed: 207.233881ms Jul 20 14:09:49.035: INFO: Pod "projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.359964317s Jul 20 14:09:51.103: INFO: Pod "projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427818258s Jul 20 14:09:53.152: INFO: Pod "projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476686471s Jul 20 14:09:55.300: INFO: Pod "projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.625374278s STEP: Saw pod success Jul 20 14:09:55.300: INFO: Pod "projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8" satisfied condition "Succeeded or Failed" Jul 20 14:09:55.305: INFO: Trying to get logs from node kali-worker pod projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8 container projected-all-volume-test: STEP: delete the pod Jul 20 14:09:55.829: INFO: Waiting for pod projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8 to disappear Jul 20 14:09:55.873: INFO: Pod projected-volume-04875731-c29e-4914-9ecf-caa8d1c6c8c8 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 20 14:09:55.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2662" for this suite. 
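The 'all projections' pod above mounts a configMap, a secret and downward API data through one projected volume. A hand-written version (all names and the image are illustrative):

kubectl create configmap demo-cm --from-literal=config=hello          # hypothetical names
kubectl create secret generic demo-secret --from-literal=token=s3cr3t
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
  labels:
    app: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.31
    command: ["sh", "-c", "cat /projected/config /projected/token /projected/labels"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:                         # one volume, three sources
      - configMap:
          name: demo-cm
      - secret:
          name: demo-secret
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF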
• [SLOW TEST:10.657 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1413,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 20 14:09:55.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 20 14:09:56.347: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/:
[node log listing returned for each of the 20 proxy requests against /api/v1/nodes/kali-worker2:10250/proxy/logs/; every response contained the same entries: alternatives.log, containers/]
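The listing above is the kubelet's log directory as read through the API server's node proxy subresource; the same request can be made directly (node name and kubelet port taken from the URL in the log):

kubectl get --raw "/api/v1/nodes/kali-worker2:10250/proxy/logs/"
# returns the directory listing shown above (alternatives.log, containers/, ...)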
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jul 20 14:09:56.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:10:13.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-809" for this suite.

• [SLOW TEST:16.956 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":88,"skipped":1473,"failed":0}
SSSSSSSSSSSSSS
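What the spec above verifies: once a CRD version is marked as not served, its schema disappears from the published OpenAPI document while the other version is left untouched. A hand-rolled CRD to walk through the same flow (the group, kind and names are hypothetical, unlike the randomly generated ones the suite uses):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true                       # change to false and re-apply to stop serving this version
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
# after re-applying with v2 served: false, the v2 schema is gone from the published spec
kubectl get --raw /openapi/v2 | grep -i widget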
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:10:13.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0720 14:10:56.865849       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 14:10:56.865: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:10:56.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4682" for this suite.

• [SLOW TEST:43.721 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":89,"skipped":1487,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
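The orphaning behaviour checked above can be reproduced by deleting a replication controller with orphan propagation: the RC is removed, its pods stay behind and their owner references are cleared. Names and image are illustrative; note that the kubectl flag spelling differs across releases (v1.18 uses --cascade=false, newer releases use --cascade=orphan):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-demo                    # hypothetical name
spec:
  replicas: 2
  selector:
    app: orphan-demo
  template:
    metadata:
      labels:
        app: orphan-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
EOF
kubectl delete rc orphan-demo --cascade=orphan   # orphan the dependents instead of deleting them
kubectl get pods -l app=orphan-demo              # the pods are still running, now without an owner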
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:10:57.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:10:59.573: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f" in namespace "projected-3553" to be "Succeeded or Failed"
Jul 20 14:11:00.097: INFO: Pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f": Phase="Pending", Reason="", readiness=false. Elapsed: 524.290707ms
Jul 20 14:11:02.235: INFO: Pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662503657s
Jul 20 14:11:04.452: INFO: Pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.879060493s
Jul 20 14:11:06.588: INFO: Pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.015211776s
Jul 20 14:11:09.152: INFO: Pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.579260767s
Jul 20 14:11:11.331: INFO: Pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f": Phase="Running", Reason="", readiness=true. Elapsed: 11.758282522s
Jul 20 14:11:13.395: INFO: Pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.822190307s
STEP: Saw pod success
Jul 20 14:11:13.395: INFO: Pod "downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f" satisfied condition "Succeeded or Failed"
Jul 20 14:11:13.705: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f container client-container: 
STEP: delete the pod
Jul 20 14:11:14.243: INFO: Waiting for pod downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f to disappear
Jul 20 14:11:14.299: INFO: Pod downwardapi-volume-9ff5b62c-4a36-465d-acc5-49e6195f656f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:11:14.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3553" for this suite.

• [SLOW TEST:17.611 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1541,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:11:14.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:11:17.997: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 20 14:11:18.427: INFO: Number of nodes with available pods: 0
Jul 20 14:11:18.427: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 20 14:11:19.060: INFO: Number of nodes with available pods: 0
Jul 20 14:11:19.060: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:20.157: INFO: Number of nodes with available pods: 0
Jul 20 14:11:20.157: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:21.375: INFO: Number of nodes with available pods: 0
Jul 20 14:11:21.375: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:22.218: INFO: Number of nodes with available pods: 0
Jul 20 14:11:22.218: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:23.499: INFO: Number of nodes with available pods: 0
Jul 20 14:11:23.499: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:24.182: INFO: Number of nodes with available pods: 0
Jul 20 14:11:24.182: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:25.149: INFO: Number of nodes with available pods: 1
Jul 20 14:11:25.149: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 20 14:11:25.739: INFO: Number of nodes with available pods: 1
Jul 20 14:11:25.739: INFO: Number of running nodes: 0, number of available pods: 1
Jul 20 14:11:26.901: INFO: Number of nodes with available pods: 0
Jul 20 14:11:26.901: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 20 14:11:27.889: INFO: Number of nodes with available pods: 0
Jul 20 14:11:27.889: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:28.924: INFO: Number of nodes with available pods: 0
Jul 20 14:11:28.925: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:29.900: INFO: Number of nodes with available pods: 0
Jul 20 14:11:29.900: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:31.021: INFO: Number of nodes with available pods: 0
Jul 20 14:11:31.021: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:31.952: INFO: Number of nodes with available pods: 0
Jul 20 14:11:31.952: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:33.034: INFO: Number of nodes with available pods: 0
Jul 20 14:11:33.034: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:33.942: INFO: Number of nodes with available pods: 0
Jul 20 14:11:33.942: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:34.893: INFO: Number of nodes with available pods: 0
Jul 20 14:11:34.893: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:36.026: INFO: Number of nodes with available pods: 0
Jul 20 14:11:36.026: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:36.894: INFO: Number of nodes with available pods: 0
Jul 20 14:11:36.894: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:37.894: INFO: Number of nodes with available pods: 0
Jul 20 14:11:37.894: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:39.051: INFO: Number of nodes with available pods: 0
Jul 20 14:11:39.051: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:40.003: INFO: Number of nodes with available pods: 0
Jul 20 14:11:40.003: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:11:41.537: INFO: Number of nodes with available pods: 1
Jul 20 14:11:41.537: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5506, will wait for the garbage collector to delete the pods
Jul 20 14:11:42.574: INFO: Deleting DaemonSet.extensions daemon-set took: 43.190467ms
Jul 20 14:11:43.174: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.303925ms
Jul 20 14:11:46.966: INFO: Number of nodes with available pods: 0
Jul 20 14:11:46.966: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 14:11:46.969: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5506/daemonsets","resourceVersion":"2732787"},"items":null}

Jul 20 14:11:47.027: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5506/pods","resourceVersion":"2732789"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:11:47.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5506" for this suite.

• [SLOW TEST:33.368 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":91,"skipped":1543,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:11:48.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 14:11:52.153: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 14:11:55.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:11:57.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:11:59.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:12:01.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851112, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 14:12:04.631: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:12:04.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6796-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:12:06.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9186" for this suite.
STEP: Destroying namespace "webhook-9186-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.053 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":92,"skipped":1557,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:12:08.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jul 20 14:12:09.035: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 14:12:12.509: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:12:26.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1254" for this suite.

• [SLOW TEST:18.048 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":93,"skipped":1557,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:12:26.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 20 14:12:38.701: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:12:39.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-929" for this suite.

• [SLOW TEST:12.897 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1580,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:12:39.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Jul 20 14:12:40.116: INFO: Waiting up to 5m0s for pod "var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163" in namespace "var-expansion-4716" to be "Succeeded or Failed"
Jul 20 14:12:40.368: INFO: Pod "var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163": Phase="Pending", Reason="", readiness=false. Elapsed: 251.622221ms
Jul 20 14:12:42.955: INFO: Pod "var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163": Phase="Pending", Reason="", readiness=false. Elapsed: 2.838743337s
Jul 20 14:12:45.106: INFO: Pod "var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163": Phase="Pending", Reason="", readiness=false. Elapsed: 4.989769957s
Jul 20 14:12:47.116: INFO: Pod "var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163": Phase="Pending", Reason="", readiness=false. Elapsed: 6.999987766s
Jul 20 14:12:49.572: INFO: Pod "var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163": Phase="Pending", Reason="", readiness=false. Elapsed: 9.45518459s
Jul 20 14:12:51.575: INFO: Pod "var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.458953999s
STEP: Saw pod success
Jul 20 14:12:51.575: INFO: Pod "var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163" satisfied condition "Succeeded or Failed"
Jul 20 14:12:51.578: INFO: Trying to get logs from node kali-worker pod var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163 container dapi-container: 
STEP: delete the pod
Jul 20 14:12:52.029: INFO: Waiting for pod var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163 to disappear
Jul 20 14:12:52.550: INFO: Pod var-expansion-dd64d05c-4018-4073-a3e9-d9ae61492163 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:12:52.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4716" for this suite.

• [SLOW TEST:14.391 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1592,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:12:53.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-9154
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9154
STEP: Deleting pre-stop pod
Jul 20 14:13:18.291: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:13:18.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9154" for this suite.

• [SLOW TEST:25.443 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":96,"skipped":1595,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:13:18.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul 20 14:13:33.556: INFO: Successfully updated pod "labelsupdate82df2d63-a3ae-4561-a420-c0ea1b994174"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:13:36.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4042" for this suite.

• [SLOW TEST:17.933 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1623,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:13:36.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-8hsm
STEP: Creating a pod to test atomic-volume-subpath
Jul 20 14:13:37.701: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8hsm" in namespace "subpath-6644" to be "Succeeded or Failed"
Jul 20 14:13:38.303: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Pending", Reason="", readiness=false. Elapsed: 601.412575ms
Jul 20 14:13:40.307: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.605551651s
Jul 20 14:13:42.459: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.758002404s
Jul 20 14:13:44.656: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.954442992s
Jul 20 14:13:46.800: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 9.098486062s
Jul 20 14:13:48.927: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 11.225557629s
Jul 20 14:13:50.985: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 13.283913928s
Jul 20 14:13:53.190: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 15.488156433s
Jul 20 14:13:55.225: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 17.523525218s
Jul 20 14:13:57.285: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 19.583707661s
Jul 20 14:13:59.361: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 21.659478224s
Jul 20 14:14:01.365: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 23.66333782s
Jul 20 14:14:03.368: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Running", Reason="", readiness=true. Elapsed: 25.666456295s
Jul 20 14:14:05.372: INFO: Pod "pod-subpath-test-downwardapi-8hsm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.670331555s
STEP: Saw pod success
Jul 20 14:14:05.372: INFO: Pod "pod-subpath-test-downwardapi-8hsm" satisfied condition "Succeeded or Failed"
Jul 20 14:14:05.375: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-8hsm container test-container-subpath-downwardapi-8hsm: 
STEP: delete the pod
Jul 20 14:14:05.435: INFO: Waiting for pod pod-subpath-test-downwardapi-8hsm to disappear
Jul 20 14:14:05.530: INFO: Pod pod-subpath-test-downwardapi-8hsm no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-8hsm
Jul 20 14:14:05.530: INFO: Deleting pod "pod-subpath-test-downwardapi-8hsm" in namespace "subpath-6644"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:14:05.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6644" for this suite.

• [SLOW TEST:28.766 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":98,"skipped":1681,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:14:05.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul 20 14:14:05.927: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:14:16.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6208" for this suite.

• [SLOW TEST:10.665 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":99,"skipped":1716,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:14:16.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8417
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 20 14:14:16.958: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 20 14:14:17.050: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:14:19.221: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:14:21.054: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:14:23.166: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:14:25.054: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:14:27.054: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:14:29.054: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:14:31.186: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:14:33.182: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:14:35.118: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:14:37.053: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:14:39.054: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 20 14:14:39.059: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul 20 14:14:41.100: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 20 14:14:51.084: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.56:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8417 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 14:14:51.084: INFO: >>> kubeConfig: /root/.kube/config
I0720 14:14:51.114722       7 log.go:172] (0xc000d32630) (0xc001bd8e60) Create stream
I0720 14:14:51.114760       7 log.go:172] (0xc000d32630) (0xc001bd8e60) Stream added, broadcasting: 1
I0720 14:14:51.118080       7 log.go:172] (0xc000d32630) Reply frame received for 1
I0720 14:14:51.118117       7 log.go:172] (0xc000d32630) (0xc000e8e780) Create stream
I0720 14:14:51.118129       7 log.go:172] (0xc000d32630) (0xc000e8e780) Stream added, broadcasting: 3
I0720 14:14:51.119195       7 log.go:172] (0xc000d32630) Reply frame received for 3
I0720 14:14:51.119242       7 log.go:172] (0xc000d32630) (0xc0011a17c0) Create stream
I0720 14:14:51.119269       7 log.go:172] (0xc000d32630) (0xc0011a17c0) Stream added, broadcasting: 5
I0720 14:14:51.120384       7 log.go:172] (0xc000d32630) Reply frame received for 5
I0720 14:14:51.203187       7 log.go:172] (0xc000d32630) Data frame received for 3
I0720 14:14:51.203226       7 log.go:172] (0xc000e8e780) (3) Data frame handling
I0720 14:14:51.203245       7 log.go:172] (0xc000e8e780) (3) Data frame sent
I0720 14:14:51.203282       7 log.go:172] (0xc000d32630) Data frame received for 5
I0720 14:14:51.203310       7 log.go:172] (0xc0011a17c0) (5) Data frame handling
I0720 14:14:51.203466       7 log.go:172] (0xc000d32630) Data frame received for 3
I0720 14:14:51.203479       7 log.go:172] (0xc000e8e780) (3) Data frame handling
I0720 14:14:51.205286       7 log.go:172] (0xc000d32630) Data frame received for 1
I0720 14:14:51.205307       7 log.go:172] (0xc001bd8e60) (1) Data frame handling
I0720 14:14:51.205320       7 log.go:172] (0xc001bd8e60) (1) Data frame sent
I0720 14:14:51.205332       7 log.go:172] (0xc000d32630) (0xc001bd8e60) Stream removed, broadcasting: 1
I0720 14:14:51.205350       7 log.go:172] (0xc000d32630) Go away received
I0720 14:14:51.205408       7 log.go:172] (0xc000d32630) (0xc001bd8e60) Stream removed, broadcasting: 1
I0720 14:14:51.205459       7 log.go:172] (0xc000d32630) (0xc000e8e780) Stream removed, broadcasting: 3
I0720 14:14:51.205500       7 log.go:172] (0xc000d32630) (0xc0011a17c0) Stream removed, broadcasting: 5
Jul 20 14:14:51.205: INFO: Found all expected endpoints: [netserver-0]
Jul 20 14:14:51.435: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.148:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8417 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 14:14:51.435: INFO: >>> kubeConfig: /root/.kube/config
I0720 14:14:51.492875       7 log.go:172] (0xc0026e5b80) (0xc00122e280) Create stream
I0720 14:14:51.492911       7 log.go:172] (0xc0026e5b80) (0xc00122e280) Stream added, broadcasting: 1
I0720 14:14:51.495209       7 log.go:172] (0xc0026e5b80) Reply frame received for 1
I0720 14:14:51.495255       7 log.go:172] (0xc0026e5b80) (0xc000ba6780) Create stream
I0720 14:14:51.495271       7 log.go:172] (0xc0026e5b80) (0xc000ba6780) Stream added, broadcasting: 3
I0720 14:14:51.496080       7 log.go:172] (0xc0026e5b80) Reply frame received for 3
I0720 14:14:51.496118       7 log.go:172] (0xc0026e5b80) (0xc00122e320) Create stream
I0720 14:14:51.496127       7 log.go:172] (0xc0026e5b80) (0xc00122e320) Stream added, broadcasting: 5
I0720 14:14:51.497082       7 log.go:172] (0xc0026e5b80) Reply frame received for 5
I0720 14:14:51.564023       7 log.go:172] (0xc0026e5b80) Data frame received for 5
I0720 14:14:51.564062       7 log.go:172] (0xc00122e320) (5) Data frame handling
I0720 14:14:51.564089       7 log.go:172] (0xc0026e5b80) Data frame received for 3
I0720 14:14:51.564100       7 log.go:172] (0xc000ba6780) (3) Data frame handling
I0720 14:14:51.564111       7 log.go:172] (0xc000ba6780) (3) Data frame sent
I0720 14:14:51.564117       7 log.go:172] (0xc0026e5b80) Data frame received for 3
I0720 14:14:51.564122       7 log.go:172] (0xc000ba6780) (3) Data frame handling
I0720 14:14:51.565588       7 log.go:172] (0xc0026e5b80) Data frame received for 1
I0720 14:14:51.565616       7 log.go:172] (0xc00122e280) (1) Data frame handling
I0720 14:14:51.565638       7 log.go:172] (0xc00122e280) (1) Data frame sent
I0720 14:14:51.565655       7 log.go:172] (0xc0026e5b80) (0xc00122e280) Stream removed, broadcasting: 1
I0720 14:14:51.565670       7 log.go:172] (0xc0026e5b80) Go away received
I0720 14:14:51.565776       7 log.go:172] (0xc0026e5b80) (0xc00122e280) Stream removed, broadcasting: 1
I0720 14:14:51.565798       7 log.go:172] (0xc0026e5b80) (0xc000ba6780) Stream removed, broadcasting: 3
I0720 14:14:51.565816       7 log.go:172] (0xc0026e5b80) (0xc00122e320) Stream removed, broadcasting: 5
Jul 20 14:14:51.565: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:14:51.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8417" for this suite.

• [SLOW TEST:35.335 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1722,"failed":0}
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:14:51.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-9886/configmap-test-fe218aba-6c6a-4c38-879f-ef2232bf21e5
STEP: Creating a pod to test consume configMaps
Jul 20 14:14:51.955: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a" in namespace "configmap-9886" to be "Succeeded or Failed"
Jul 20 14:14:51.987: INFO: Pod "pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.304187ms
Jul 20 14:14:53.991: INFO: Pod "pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036548867s
Jul 20 14:14:56.014: INFO: Pod "pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0591094s
Jul 20 14:14:58.028: INFO: Pod "pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072967337s
STEP: Saw pod success
Jul 20 14:14:58.028: INFO: Pod "pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a" satisfied condition "Succeeded or Failed"
Jul 20 14:14:58.309: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a container env-test: 
STEP: delete the pod
Jul 20 14:14:58.637: INFO: Waiting for pod pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a to disappear
Jul 20 14:14:58.641: INFO: Pod pod-configmaps-9e4fd6b7-1d7e-44c7-8d07-4db505a9df5a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:14:58.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9886" for this suite.

• [SLOW TEST:7.190 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1722,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:14:58.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 20 14:14:59.204: INFO: Waiting up to 5m0s for pod "pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b" in namespace "emptydir-1530" to be "Succeeded or Failed"
Jul 20 14:14:59.411: INFO: Pod "pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b": Phase="Pending", Reason="", readiness=false. Elapsed: 206.349663ms
Jul 20 14:15:01.597: INFO: Pod "pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392991436s
Jul 20 14:15:03.651: INFO: Pod "pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446887798s
Jul 20 14:15:05.968: INFO: Pod "pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.763711349s
STEP: Saw pod success
Jul 20 14:15:05.968: INFO: Pod "pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b" satisfied condition "Succeeded or Failed"
Jul 20 14:15:05.970: INFO: Trying to get logs from node kali-worker2 pod pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b container test-container: 
STEP: delete the pod
Jul 20 14:15:06.205: INFO: Waiting for pod pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b to disappear
Jul 20 14:15:06.222: INFO: Pod pod-85fa26a3-f9e5-4409-b45b-d7de0b5fcc6b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:15:06.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1530" for this suite.

• [SLOW TEST:7.438 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1722,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:15:06.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-2e2d0e4c-202e-47e0-88fd-fa8f1e316915
STEP: Creating a pod to test consume secrets
Jul 20 14:15:06.559: INFO: Waiting up to 5m0s for pod "pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4" in namespace "secrets-4166" to be "Succeeded or Failed"
Jul 20 14:15:06.698: INFO: Pod "pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4": Phase="Pending", Reason="", readiness=false. Elapsed: 139.199574ms
Jul 20 14:15:09.205: INFO: Pod "pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645666373s
Jul 20 14:15:11.209: INFO: Pod "pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649924034s
Jul 20 14:15:13.231: INFO: Pod "pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672227628s
Jul 20 14:15:15.235: INFO: Pod "pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.67600436s
STEP: Saw pod success
Jul 20 14:15:15.235: INFO: Pod "pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4" satisfied condition "Succeeded or Failed"
Jul 20 14:15:15.238: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4 container secret-env-test: 
STEP: delete the pod
Jul 20 14:15:15.277: INFO: Waiting for pod pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4 to disappear
Jul 20 14:15:15.375: INFO: Pod pod-secrets-2605a62a-4625-4148-aed7-432ba5a078b4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:15:15.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4166" for this suite.

• [SLOW TEST:9.155 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1737,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:15:15.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:15:15.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9443" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":104,"skipped":1767,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:15:15.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:15:15.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7" in namespace "downward-api-6910" to be "Succeeded or Failed"
Jul 20 14:15:16.099: INFO: Pod "downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 101.429078ms
Jul 20 14:15:18.363: INFO: Pod "downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365491847s
Jul 20 14:15:20.367: INFO: Pod "downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369067428s
Jul 20 14:15:22.435: INFO: Pod "downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7": Phase="Running", Reason="", readiness=true. Elapsed: 6.437090819s
Jul 20 14:15:24.635: INFO: Pod "downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.637090068s
STEP: Saw pod success
Jul 20 14:15:24.635: INFO: Pod "downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7" satisfied condition "Succeeded or Failed"
Jul 20 14:15:24.639: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7 container client-container: 
STEP: delete the pod
Jul 20 14:15:24.825: INFO: Waiting for pod downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7 to disappear
Jul 20 14:15:24.857: INFO: Pod downwardapi-volume-a5a98ef8-47fd-479b-96cb-474fa847efa7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:15:24.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6910" for this suite.

• [SLOW TEST:9.101 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1775,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:15:24.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:15:36.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5012" for this suite.

• [SLOW TEST:11.610 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":106,"skipped":1866,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:15:36.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0720 14:15:49.538455       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 14:15:49.538: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:15:49.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-371" for this suite.

• [SLOW TEST:13.685 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":107,"skipped":1880,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:15:50.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Jul 20 14:15:51.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config api-versions'
Jul 20 14:15:52.431: INFO: stderr: ""
Jul 20 14:15:52.431: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:15:52.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3865" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":108,"skipped":1884,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:15:52.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 14:15:53.490: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 14:15:55.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:15:58.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:15:59.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:16:01.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:16:03.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851353, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 14:16:06.652: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:16:06.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5433" for this suite.
STEP: Destroying namespace "webhook-5433-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.749 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":109,"skipped":1884,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:16:07.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 20 14:16:07.779: INFO: Waiting up to 5m0s for pod "pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df" in namespace "emptydir-3714" to be "Succeeded or Failed"
Jul 20 14:16:07.918: INFO: Pod "pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df": Phase="Pending", Reason="", readiness=false. Elapsed: 139.013953ms
Jul 20 14:16:10.095: INFO: Pod "pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315900137s
Jul 20 14:16:12.208: INFO: Pod "pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42882685s
Jul 20 14:16:14.394: INFO: Pod "pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.615289065s
STEP: Saw pod success
Jul 20 14:16:14.394: INFO: Pod "pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df" satisfied condition "Succeeded or Failed"
Jul 20 14:16:14.489: INFO: Trying to get logs from node kali-worker2 pod pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df container test-container: 
STEP: delete the pod
Jul 20 14:16:14.665: INFO: Waiting for pod pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df to disappear
Jul 20 14:16:14.759: INFO: Pod pod-f10b7ab8-a7a1-414e-a5b4-c8f8e82c98df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:16:14.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3714" for this suite.

• [SLOW TEST:7.575 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1905,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:16:14.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-e197c599-3388-415f-a7b6-e340c3bb785e
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e197c599-3388-415f-a7b6-e340c3bb785e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:17:36.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8542" for this suite.

• [SLOW TEST:81.923 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1948,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:17:36.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-9936
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9936 to expose endpoints map[]
Jul 20 14:17:38.149: INFO: Get endpoints failed (348.384462ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 20 14:17:39.497: INFO: successfully validated that service multi-endpoint-test in namespace services-9936 exposes endpoints map[] (1.696473614s elapsed)
STEP: Creating pod pod1 in namespace services-9936
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9936 to expose endpoints map[pod1:[100]]
Jul 20 14:17:44.629: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.125897726s elapsed, will retry)
Jul 20 14:17:46.929: INFO: successfully validated that service multi-endpoint-test in namespace services-9936 exposes endpoints map[pod1:[100]] (7.425917943s elapsed)
STEP: Creating pod pod2 in namespace services-9936
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9936 to expose endpoints map[pod1:[100] pod2:[101]]
Jul 20 14:17:51.411: INFO: Unexpected endpoints: found map[5ba0d3a4-2ceb-4617-a6e6-54888df3a479:[100]], expected map[pod1:[100] pod2:[101]] (4.448040393s elapsed, will retry)
Jul 20 14:17:53.766: INFO: successfully validated that service multi-endpoint-test in namespace services-9936 exposes endpoints map[pod1:[100] pod2:[101]] (6.802696003s elapsed)
STEP: Deleting pod pod1 in namespace services-9936
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9936 to expose endpoints map[pod2:[101]]
Jul 20 14:17:54.539: INFO: successfully validated that service multi-endpoint-test in namespace services-9936 exposes endpoints map[pod2:[101]] (767.973569ms elapsed)
STEP: Deleting pod pod2 in namespace services-9936
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9936 to expose endpoints map[]
Jul 20 14:17:55.718: INFO: successfully validated that service multi-endpoint-test in namespace services-9936 exposes endpoints map[] (1.173589269s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:17:56.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9936" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:19.467 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":112,"skipped":1960,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:17:56.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:17:56.330: INFO: Creating ReplicaSet my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740
Jul 20 14:17:56.442: INFO: Pod name my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740: Found 0 pods out of 1
Jul 20 14:18:01.527: INFO: Pod name my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740: Found 1 pods out of 1
Jul 20 14:18:01.527: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740" is running
Jul 20 14:18:03.731: INFO: Pod "my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740-ncdkz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 14:17:56 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 14:17:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 14:17:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 14:17:56 +0000 UTC Reason: Message:}])
Jul 20 14:18:03.731: INFO: Trying to dial the pod
Jul 20 14:18:08.761: INFO: Controller my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740: Got expected result from replica 1 [my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740-ncdkz]: "my-hostname-basic-d2d71ee7-e884-4049-ba73-c8b53f7e5740-ncdkz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:18:08.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7386" for this suite.

• [SLOW TEST:12.612 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":113,"skipped":1984,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:18:08.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-f0760429-edec-4377-a869-7b859c2f9000
STEP: Creating a pod to test consume configMaps
Jul 20 14:18:09.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4" in namespace "configmap-1893" to be "Succeeded or Failed"
Jul 20 14:18:09.971: INFO: Pod "pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 598.024058ms
Jul 20 14:18:11.975: INFO: Pod "pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.602415602s
Jul 20 14:18:14.425: INFO: Pod "pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.052237077s
Jul 20 14:18:16.742: INFO: Pod "pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4": Phase="Running", Reason="", readiness=true. Elapsed: 7.369216258s
Jul 20 14:18:18.786: INFO: Pod "pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.412917877s
STEP: Saw pod success
Jul 20 14:18:18.786: INFO: Pod "pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4" satisfied condition "Succeeded or Failed"
Jul 20 14:18:18.789: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4 container configmap-volume-test: 
STEP: delete the pod
Jul 20 14:18:18.935: INFO: Waiting for pod pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4 to disappear
Jul 20 14:18:18.989: INFO: Pod pod-configmaps-517ae1d9-5e1e-48f8-88cd-6498b740a1d4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:18:18.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1893" for this suite.

• [SLOW TEST:10.228 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1991,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:18:18.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Jul 20 14:18:20.520: INFO: created pod pod-service-account-defaultsa
Jul 20 14:18:20.520: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul 20 14:18:20.599: INFO: created pod pod-service-account-mountsa
Jul 20 14:18:20.599: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul 20 14:18:20.659: INFO: created pod pod-service-account-nomountsa
Jul 20 14:18:20.659: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul 20 14:18:20.690: INFO: created pod pod-service-account-defaultsa-mountspec
Jul 20 14:18:20.690: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul 20 14:18:20.845: INFO: created pod pod-service-account-mountsa-mountspec
Jul 20 14:18:20.845: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul 20 14:18:20.858: INFO: created pod pod-service-account-nomountsa-mountspec
Jul 20 14:18:20.858: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul 20 14:18:20.907: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul 20 14:18:20.907: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul 20 14:18:21.030: INFO: created pod pod-service-account-mountsa-nomountspec
Jul 20 14:18:21.030: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul 20 14:18:21.034: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul 20 14:18:21.034: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:18:21.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6883" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":115,"skipped":2043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:18:21.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0720 14:18:24.739399       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 14:18:24.739: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:18:24.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2065" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":116,"skipped":2081,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:18:25.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:18:26.426: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2" in namespace "security-context-test-2622" to be "Succeeded or Failed"
Jul 20 14:18:26.505: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2": Phase="Pending", Reason="", readiness=false. Elapsed: 78.963236ms
Jul 20 14:18:28.774: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348878001s
Jul 20 14:18:31.257: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.831822482s
Jul 20 14:18:33.582: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.156286218s
Jul 20 14:18:35.969: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.543274844s
Jul 20 14:18:37.987: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.561818516s
Jul 20 14:18:40.372: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2": Phase="Running", Reason="", readiness=true. Elapsed: 13.94625165s
Jul 20 14:18:42.574: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.148887634s
Jul 20 14:18:42.574: INFO: Pod "busybox-user-65534-d10be916-55ff-49ce-a7ef-8c330f2966c2" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:18:42.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2622" for this suite.

• [SLOW TEST:17.463 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2092,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:18:42.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Jul 20 14:18:42.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6759'
Jul 20 14:18:46.538: INFO: stderr: ""
Jul 20 14:18:46.538: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 14:18:46.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6759'
Jul 20 14:18:46.670: INFO: stderr: ""
Jul 20 14:18:46.670: INFO: stdout: "update-demo-nautilus-k4wzb update-demo-nautilus-tvztl "
Jul 20 14:18:46.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4wzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6759'
Jul 20 14:18:46.820: INFO: stderr: ""
Jul 20 14:18:46.820: INFO: stdout: ""
Jul 20 14:18:46.820: INFO: update-demo-nautilus-k4wzb is created but not running
Jul 20 14:18:51.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6759'
Jul 20 14:18:51.929: INFO: stderr: ""
Jul 20 14:18:51.929: INFO: stdout: "update-demo-nautilus-k4wzb update-demo-nautilus-tvztl "
Jul 20 14:18:51.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4wzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6759'
Jul 20 14:18:52.030: INFO: stderr: ""
Jul 20 14:18:52.030: INFO: stdout: ""
Jul 20 14:18:52.030: INFO: update-demo-nautilus-k4wzb is created but not running
Jul 20 14:18:57.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6759'
Jul 20 14:18:57.143: INFO: stderr: ""
Jul 20 14:18:57.143: INFO: stdout: "update-demo-nautilus-k4wzb update-demo-nautilus-tvztl "
Jul 20 14:18:57.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4wzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6759'
Jul 20 14:18:57.241: INFO: stderr: ""
Jul 20 14:18:57.241: INFO: stdout: "true"
Jul 20 14:18:57.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4wzb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6759'
Jul 20 14:18:57.346: INFO: stderr: ""
Jul 20 14:18:57.346: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 14:18:57.346: INFO: validating pod update-demo-nautilus-k4wzb
Jul 20 14:18:57.351: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 14:18:57.351: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 14:18:57.351: INFO: update-demo-nautilus-k4wzb is verified up and running
Jul 20 14:18:57.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tvztl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6759'
Jul 20 14:18:57.473: INFO: stderr: ""
Jul 20 14:18:57.473: INFO: stdout: "true"
Jul 20 14:18:57.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tvztl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6759'
Jul 20 14:18:57.608: INFO: stderr: ""
Jul 20 14:18:57.608: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 14:18:57.608: INFO: validating pod update-demo-nautilus-tvztl
Jul 20 14:18:57.613: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 14:18:57.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 14:18:57.613: INFO: update-demo-nautilus-tvztl is verified up and running
STEP: using delete to clean up resources
Jul 20 14:18:57.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6759'
Jul 20 14:18:57.789: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 14:18:57.789: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 20 14:18:57.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6759'
Jul 20 14:18:57.893: INFO: stderr: "No resources found in kubectl-6759 namespace.\n"
Jul 20 14:18:57.893: INFO: stdout: ""
Jul 20 14:18:57.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6759 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 14:18:58.389: INFO: stderr: ""
Jul 20 14:18:58.389: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:18:58.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6759" for this suite.

• [SLOW TEST:15.786 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":118,"skipped":2113,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:18:58.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-e9b1dfd8-6bda-4cfa-89fd-b271ede85c42
STEP: Creating a pod to test consume secrets
Jul 20 14:18:59.356: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94" in namespace "projected-7593" to be "Succeeded or Failed"
Jul 20 14:19:00.350: INFO: Pod "pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94": Phase="Pending", Reason="", readiness=false. Elapsed: 994.528623ms
Jul 20 14:19:02.552: INFO: Pod "pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94": Phase="Pending", Reason="", readiness=false. Elapsed: 3.19590007s
Jul 20 14:19:04.838: INFO: Pod "pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94": Phase="Pending", Reason="", readiness=false. Elapsed: 5.482457297s
Jul 20 14:19:06.938: INFO: Pod "pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94": Phase="Running", Reason="", readiness=true. Elapsed: 7.581682452s
Jul 20 14:19:08.942: INFO: Pod "pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.586143176s
STEP: Saw pod success
Jul 20 14:19:08.942: INFO: Pod "pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94" satisfied condition "Succeeded or Failed"
Jul 20 14:19:08.945: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94 container secret-volume-test: 
STEP: delete the pod
Jul 20 14:19:09.121: INFO: Waiting for pod pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94 to disappear
Jul 20 14:19:09.140: INFO: Pod pod-projected-secrets-8f76d070-d690-41c6-bc82-6b6bfcb0bc94 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:19:09.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7593" for this suite.

• [SLOW TEST:10.751 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:19:09.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:19:09.512: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2075a8e9-0d04-4209-8cbc-c074fda82320" in namespace "security-context-test-7268" to be "Succeeded or Failed"
Jul 20 14:19:09.581: INFO: Pod "busybox-readonly-false-2075a8e9-0d04-4209-8cbc-c074fda82320": Phase="Pending", Reason="", readiness=false. Elapsed: 68.766981ms
Jul 20 14:19:11.585: INFO: Pod "busybox-readonly-false-2075a8e9-0d04-4209-8cbc-c074fda82320": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072830699s
Jul 20 14:19:13.755: INFO: Pod "busybox-readonly-false-2075a8e9-0d04-4209-8cbc-c074fda82320": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242651755s
Jul 20 14:19:15.758: INFO: Pod "busybox-readonly-false-2075a8e9-0d04-4209-8cbc-c074fda82320": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.246156093s
Jul 20 14:19:15.758: INFO: Pod "busybox-readonly-false-2075a8e9-0d04-4209-8cbc-c074fda82320" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:19:15.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7268" for this suite.

• [SLOW TEST:6.618 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2144,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:19:15.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-cd5c7fe8-c700-4f0f-9a38-5d4a4f10142b in namespace container-probe-7930
Jul 20 14:19:22.406: INFO: Started pod test-webserver-cd5c7fe8-c700-4f0f-9a38-5d4a4f10142b in namespace container-probe-7930
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 14:19:22.471: INFO: Initial restart count of pod test-webserver-cd5c7fe8-c700-4f0f-9a38-5d4a4f10142b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:23:24.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7930" for this suite.

• [SLOW TEST:248.840 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2159,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:23:24.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:23:27.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2755" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":122,"skipped":2180,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:23:27.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7785
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7785
STEP: creating replication controller externalsvc in namespace services-7785
I0720 14:23:30.684903       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7785, replica count: 2
I0720 14:23:33.735347       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:23:36.735556       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:23:39.735756       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jul 20 14:23:39.844: INFO: Creating new exec pod
Jul 20 14:23:46.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7785 execpodhspgv -- /bin/sh -x -c nslookup clusterip-service'
Jul 20 14:23:47.111: INFO: stderr: "I0720 14:23:46.733212    1344 log.go:172] (0xc000a342c0) (0xc000a86280) Create stream\nI0720 14:23:46.733273    1344 log.go:172] (0xc000a342c0) (0xc000a86280) Stream added, broadcasting: 1\nI0720 14:23:46.736348    1344 log.go:172] (0xc000a342c0) Reply frame received for 1\nI0720 14:23:46.736397    1344 log.go:172] (0xc000a342c0) (0xc000a86320) Create stream\nI0720 14:23:46.736495    1344 log.go:172] (0xc000a342c0) (0xc000a86320) Stream added, broadcasting: 3\nI0720 14:23:46.737796    1344 log.go:172] (0xc000a342c0) Reply frame received for 3\nI0720 14:23:46.738207    1344 log.go:172] (0xc000a342c0) (0xc00097a000) Create stream\nI0720 14:23:46.738247    1344 log.go:172] (0xc000a342c0) (0xc00097a000) Stream added, broadcasting: 5\nI0720 14:23:46.740137    1344 log.go:172] (0xc000a342c0) Reply frame received for 5\nI0720 14:23:46.817248    1344 log.go:172] (0xc000a342c0) Data frame received for 5\nI0720 14:23:46.817276    1344 log.go:172] (0xc00097a000) (5) Data frame handling\nI0720 14:23:46.817292    1344 log.go:172] (0xc00097a000) (5) Data frame sent\n+ nslookup clusterip-service\nI0720 14:23:47.100158    1344 log.go:172] (0xc000a342c0) Data frame received for 3\nI0720 14:23:47.100192    1344 log.go:172] (0xc000a86320) (3) Data frame handling\nI0720 14:23:47.100215    1344 log.go:172] (0xc000a86320) (3) Data frame sent\nI0720 14:23:47.101264    1344 log.go:172] (0xc000a342c0) Data frame received for 3\nI0720 14:23:47.101304    1344 log.go:172] (0xc000a86320) (3) Data frame handling\nI0720 14:23:47.101463    1344 log.go:172] (0xc000a86320) (3) Data frame sent\nI0720 14:23:47.101793    1344 log.go:172] (0xc000a342c0) Data frame received for 5\nI0720 14:23:47.101824    1344 log.go:172] (0xc00097a000) (5) Data frame handling\nI0720 14:23:47.101852    1344 log.go:172] (0xc000a342c0) Data frame received for 3\nI0720 14:23:47.101873    1344 log.go:172] (0xc000a86320) (3) Data frame handling\nI0720 14:23:47.104935    1344 log.go:172] (0xc000a342c0) Data frame received for 1\nI0720 14:23:47.104978    1344 log.go:172] (0xc000a86280) (1) Data frame handling\nI0720 14:23:47.105007    1344 log.go:172] (0xc000a86280) (1) Data frame sent\nI0720 14:23:47.105042    1344 log.go:172] (0xc000a342c0) (0xc000a86280) Stream removed, broadcasting: 1\nI0720 14:23:47.105074    1344 log.go:172] (0xc000a342c0) Go away received\nI0720 14:23:47.105595    1344 log.go:172] (0xc000a342c0) (0xc000a86280) Stream removed, broadcasting: 1\nI0720 14:23:47.105630    1344 log.go:172] (0xc000a342c0) (0xc000a86320) Stream removed, broadcasting: 3\nI0720 14:23:47.105649    1344 log.go:172] (0xc000a342c0) (0xc00097a000) Stream removed, broadcasting: 5\n"
Jul 20 14:23:47.111: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7785.svc.cluster.local\tcanonical name = externalsvc.services-7785.svc.cluster.local.\nName:\texternalsvc.services-7785.svc.cluster.local\nAddress: 10.96.240.235\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7785, will wait for the garbage collector to delete the pods
Jul 20 14:23:47.171: INFO: Deleting ReplicationController externalsvc took: 6.153261ms
Jul 20 14:23:47.572: INFO: Terminating ReplicationController externalsvc pods took: 400.242581ms
Jul 20 14:24:03.869: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:24:04.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7785" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:38.616 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":123,"skipped":2181,"failed":0}
SSSSSSS
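The type change above can be reproduced with a client-go sketch along these lines: fetch the Service, switch spec.type to ExternalName, point spec.externalName at the backing service's cluster DNS name, and clear spec.clusterIP, after which lookups of the service resolve as a CNAME (as the nslookup output above shows). The namespace is a placeholder for the suite's generated one, and newer API servers may additionally require clearing spec.clusterIPs.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // the spec above runs in its own namespace, services-7785
	svcs := client.CoreV1().Services(ns)

	svc, err := svcs.Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the type and point the name at the backing service's FQDN,
	// so the old service name becomes a CNAME to externalsvc.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc." + ns + ".svc.cluster.local"
	svc.Spec.ClusterIP = "" // ExternalName services carry no cluster IP

	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}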
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:24:05.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-8rdk
STEP: Creating a pod to test atomic-volume-subpath
Jul 20 14:24:07.314: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8rdk" in namespace "subpath-1382" to be "Succeeded or Failed"
Jul 20 14:24:07.333: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Pending", Reason="", readiness=false. Elapsed: 18.961575ms
Jul 20 14:24:09.338: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023448641s
Jul 20 14:24:11.422: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107574746s
Jul 20 14:24:14.022: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 6.707572223s
Jul 20 14:24:16.729: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 9.414543216s
Jul 20 14:24:19.171: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 11.856932764s
Jul 20 14:24:21.175: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 13.860772835s
Jul 20 14:24:23.178: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 15.86434596s
Jul 20 14:24:25.183: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 17.868591112s
Jul 20 14:24:27.303: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 19.988409937s
Jul 20 14:24:29.307: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 21.992518197s
Jul 20 14:24:31.311: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Running", Reason="", readiness=true. Elapsed: 23.996973572s
Jul 20 14:24:33.542: INFO: Pod "pod-subpath-test-configmap-8rdk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.227598938s
STEP: Saw pod success
Jul 20 14:24:33.542: INFO: Pod "pod-subpath-test-configmap-8rdk" satisfied condition "Succeeded or Failed"
Jul 20 14:24:33.544: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-8rdk container test-container-subpath-configmap-8rdk: 
STEP: delete the pod
Jul 20 14:24:33.747: INFO: Waiting for pod pod-subpath-test-configmap-8rdk to disappear
Jul 20 14:24:33.889: INFO: Pod pod-subpath-test-configmap-8rdk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8rdk
Jul 20 14:24:33.889: INFO: Deleting pod "pod-subpath-test-configmap-8rdk" in namespace "subpath-1382"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:24:33.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1382" for this suite.

• [SLOW TEST:28.056 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":124,"skipped":2188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
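A minimal sketch of the subPath pattern being verified: a single ConfigMap key is mounted, via subPath, over a file path that already exists in the image, replacing just that one file and leaving the rest of the directory untouched. The ConfigMap contents, image, namespace and the /etc/hosts target are illustrative choices, not the suite's fixtures.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	ns := "default"

	// A one-key ConfigMap to project into the pod.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Data:       map[string]string{"hosts": "127.0.0.1 demo.local"},
	}
	if _, err := client.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Mount only the "hosts" key, via subPath, over an existing file.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-demo"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"cat", "/etc/hosts"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/hosts",
					SubPath:   "hosts",
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}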
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:24:33.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 14:24:36.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 14:24:38.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851875, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:24:40.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851875, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:24:42.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851876, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730851875, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 14:24:45.585: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:24:46.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:24:48.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9495" for this suite.
STEP: Destroying namespace "webhook-9495-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.942 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":125,"skipped":2212,"failed":0}
SSSSSSS
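The registration step above amounts to creating a ValidatingWebhookConfiguration whose rules match CREATE, UPDATE and DELETE of the custom resource; a hedged sketch follows. The group, resource name, webhook service, path and CA bundle are placeholders (the suite generates its own), and the webhook server that actually returns the deny responses is not shown.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	path := "/custom-resource"
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	var caBundle []byte // assumption: PEM CA that signed the webhook's serving certificate

	// Route writes to the custom resource through the webhook service; the
	// webhook's admission responses decide whether each operation is allowed.
	webhook := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-custom-resource-demo"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com", // placeholder webhook name
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
					admissionregistrationv1.Delete,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"example.com"}, // the custom resource's group
					APIVersions: []string{"v1"},
					Resources:   []string{"widgets"}, // the custom resource's plural
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",
					Name:      "sample-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}

	_, err = client.AdmissionregistrationV1().
		ValidatingWebhookConfigurations().
		Create(context.TODO(), webhook, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}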
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:24:48.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-7a4384a8-a73a-43d2-823f-f9649e49a4c2
STEP: Creating a pod to test consume configMaps
Jul 20 14:24:49.213: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef" in namespace "projected-1950" to be "Succeeded or Failed"
Jul 20 14:24:49.585: INFO: Pod "pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 371.380399ms
Jul 20 14:24:51.861: INFO: Pod "pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.647774289s
Jul 20 14:24:53.914: INFO: Pod "pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.700887791s
Jul 20 14:24:55.997: INFO: Pod "pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.784043436s
Jul 20 14:24:58.001: INFO: Pod "pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.788313867s
STEP: Saw pod success
Jul 20 14:24:58.002: INFO: Pod "pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef" satisfied condition "Succeeded or Failed"
Jul 20 14:24:58.004: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 14:24:58.448: INFO: Waiting for pod pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef to disappear
Jul 20 14:24:58.474: INFO: Pod pod-projected-configmaps-53e02b8d-359f-4399-b225-be2bb3eef6ef no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:24:58.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1950" for this suite.

• [SLOW TEST:9.639 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2219,"failed":0}
SSSSSSSSSSSSSSSSSS
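A sketch of the "mappings" variant being tested: the projected volume below remaps a ConfigMap key to a different file path inside the mount instead of using the key name directly. The ConfigMap name, key, target path, image and namespace are illustrative stand-ins for the suite's generated fixtures.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Project ConfigMap key "data-1" to the file path/to/data-2 inside the mount.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-demo"},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "path/to/data-2",
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}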
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:24:58.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-202fbd9e-e182-48f9-a69b-1f679efb2d0e
STEP: Creating a pod to test consume configMaps
Jul 20 14:24:58.830: INFO: Waiting up to 5m0s for pod "pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f" in namespace "configmap-8240" to be "Succeeded or Failed"
Jul 20 14:24:58.941: INFO: Pod "pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f": Phase="Pending", Reason="", readiness=false. Elapsed: 111.011827ms
Jul 20 14:25:01.057: INFO: Pod "pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227441965s
Jul 20 14:25:03.499: INFO: Pod "pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.668903417s
Jul 20 14:25:05.558: INFO: Pod "pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f": Phase="Running", Reason="", readiness=true. Elapsed: 6.727842509s
Jul 20 14:25:07.562: INFO: Pod "pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.732428326s
STEP: Saw pod success
Jul 20 14:25:07.562: INFO: Pod "pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f" satisfied condition "Succeeded or Failed"
Jul 20 14:25:07.565: INFO: Trying to get logs from node kali-worker pod pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f container configmap-volume-test: 
STEP: delete the pod
Jul 20 14:25:07.625: INFO: Waiting for pod pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f to disappear
Jul 20 14:25:07.653: INFO: Pod pod-configmaps-496bc28a-0a65-4f5b-ab4f-78b76886187f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:25:07.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8240" for this suite.

• [SLOW TEST:9.344 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2237,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:25:07.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-0d55919f-b6d7-42e9-9318-c1d1cbba877b in namespace container-probe-1487
Jul 20 14:25:16.139: INFO: Started pod liveness-0d55919f-b6d7-42e9-9318-c1d1cbba877b in namespace container-probe-1487
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 14:25:16.142: INFO: Initial restart count of pod liveness-0d55919f-b6d7-42e9-9318-c1d1cbba877b is 0
Jul 20 14:25:40.457: INFO: Restart count of pod container-probe-1487/liveness-0d55919f-b6d7-42e9-9318-c1d1cbba877b is now 1 (24.314936341s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:25:40.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1487" for this suite.

• [SLOW TEST:32.720 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2277,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:25:40.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:25:41.545: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:25:42.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2947" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":129,"skipped":2280,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:25:42.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul 20 14:25:42.980: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 14:25:42.991: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 14:25:42.993: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul 20 14:25:42.997: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Jul 20 14:25:42.997: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 14:25:42.997: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Jul 20 14:25:42.997: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 14:25:42.997: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul 20 14:25:43.000: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 20 14:25:43.000: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 14:25:43.000: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 20 14:25:43.000: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-67a3e5ba-d203-47d7-ae79-4e2b5dd91fa7 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-67a3e5ba-d203-47d7-ae79-4e2b5dd91fa7 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-67a3e5ba-d203-47d7-ae79-4e2b5dd91fa7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:26:09.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8299" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:27.340 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":130,"skipped":2307,"failed":0}
SSS
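What the scheduler is being asked to accept here is three pods on one node that all request hostPort 54321 but differ in hostIP or protocol; host port conflicts are keyed on the (hostIP, protocol, hostPort) tuple, so none of them collide. The sketch below pins the pods with spec.nodeName instead of the label the suite applies, and the image, container port and namespace are placeholders.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostPortPod pins a pod to one node and asks for hostPort 54321 bound to a
// specific hostIP and protocol.
func hostPortPod(name, node, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeName: node,
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "k8s.gcr.io/pause:3.2", // placeholder image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods := []*corev1.Pod{
		hostPortPod("pod1", "kali-worker2", "127.0.0.1", corev1.ProtocolTCP),
		hostPortPod("pod2", "kali-worker2", "127.0.0.2", corev1.ProtocolTCP), // same port, different hostIP
		hostPortPod("pod3", "kali-worker2", "127.0.0.2", corev1.ProtocolUDP), // same port and hostIP, different protocol
	}
	for _, p := range pods {
		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("created", p.Name)
	}
}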
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:26:09.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:26:09.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:26:17.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1922" for this suite.

• [SLOW TEST:8.331 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2310,"failed":0}
SSSSSSSSSS
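The spec fetches container logs over a websocket connection; the everyday client-go equivalent streams the same logs endpoint over plain HTTP, as in the sketch below. Pod name, container name and namespace are placeholders, and this is not the websocket code path the conformance test itself uses.

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Stream the container's log endpoint and copy it to stdout.
	req := client.CoreV1().Pods("default").GetLogs("pod-logs-demo", &corev1.PodLogOptions{
		Container: "main", // placeholder container name
		Follow:    true,
	})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	if _, err := io.Copy(os.Stdout, stream); err != nil {
		panic(err)
	}
}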
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:26:17.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:26:18.362: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3a2bd34b-a0fd-4821-9807-64f10fc9ac4c", Controller:(*bool)(0xc00085a5ea), BlockOwnerDeletion:(*bool)(0xc00085a5eb)}}
Jul 20 14:26:18.435: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5ea26cdf-2c37-45be-beb3-30bc3af2a792", Controller:(*bool)(0xc0032e0902), BlockOwnerDeletion:(*bool)(0xc0032e0903)}}
Jul 20 14:26:18.567: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"52160661-57cf-448f-a6fe-9b37a65c6ae3", Controller:(*bool)(0xc0032e0af2), BlockOwnerDeletion:(*bool)(0xc0032e0af3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:26:23.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9017" for this suite.

• [SLOW TEST:6.610 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":132,"skipped":2320,"failed":0}
SSSSSSSSSSSS
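The circular ownership above is built by stamping ownerReferences onto already-created pods, since each reference needs the owner's UID. A hedged sketch of one such edge, making pod2 a dependent of pod1 via a JSON merge patch, is shown below; the pod names and namespace are assumptions, and the garbage collector is expected to clean up the whole cycle once any member is deleted.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	ns := "default"

	// Owner references carry the owner's UID, so read pod1 first.
	owner, err := client.CoreV1().Pods(ns).Get(ctx, "pod1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Make pod2 a dependent of pod1; blockOwnerDeletion keeps pod1 around
	// during foreground deletion until its dependents are gone.
	patch := fmt.Sprintf(
		`{"metadata":{"ownerReferences":[{"apiVersion":"v1","kind":"Pod","name":"pod1","uid":%q,"controller":true,"blockOwnerDeletion":true}]}}`,
		owner.UID,
	)
	if _, err := client.CoreV1().Pods(ns).Patch(ctx, "pod2", types.MergePatchType, []byte(patch), metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}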
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:26:24.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Jul 20 14:26:25.414: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Jul 20 14:26:25.460: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Jul 20 14:26:25.460: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jul 20 14:26:25.585: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Jul 20 14:26:25.585: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jul 20 14:26:25.729: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Jul 20 14:26:25.729: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Jul 20 14:26:34.909: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:26:35.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4927" for this suite.

• [SLOW TEST:12.433 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":133,"skipped":2332,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
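The defaults being verified come from a LimitRange along the lines of the following sketch: per-container defaultRequest and default values that the LimitRanger admission plugin copies into pods created without explicit resources, plus min and max bounds that reject out-of-range pods. The concrete quantities and the namespace here are illustrative, not the suite's exact values (which also cover ephemeral-storage).

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Per-container defaults and bounds enforced at admission time.
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limits-demo"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("200Mi"),
				},
				Default: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("500Mi"),
				},
				Min: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("50m"),
					corev1.ResourceMemory: resource.MustParse("100Mi"),
				},
				Max: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("1Gi"),
				},
			}},
		},
	}

	if _, err := client.CoreV1().LimitRanges("default").Create(context.TODO(), lr, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}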
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:26:36.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:26:40.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5" in namespace "projected-6269" to be "Succeeded or Failed"
Jul 20 14:26:41.942: INFO: Pod "downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.446535527s
Jul 20 14:26:44.429: INFO: Pod "downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.933629399s
Jul 20 14:26:47.364: INFO: Pod "downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.86833905s
Jul 20 14:26:50.371: INFO: Pod "downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.875965449s
Jul 20 14:26:53.939: INFO: Pod "downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.443547497s
Jul 20 14:26:56.208: INFO: Pod "downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.712136022s
STEP: Saw pod success
Jul 20 14:26:56.208: INFO: Pod "downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5" satisfied condition "Succeeded or Failed"
Jul 20 14:26:56.210: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5 container client-container: 
STEP: delete the pod
Jul 20 14:26:56.536: INFO: Waiting for pod downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5 to disappear
Jul 20 14:26:56.682: INFO: Pod downwardapi-volume-11a36d6b-2b01-44ce-aaf8-1d73d61f89a5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:26:56.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6269" for this suite.

• [SLOW TEST:19.987 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2376,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
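A sketch of the downward API arrangement under test: a projected downwardAPI volume exposes limits.cpu for a container that declares no CPU limit, so the resulting file reports the node's allocatable CPU instead. The image, names and namespace are placeholders.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Expose the container's effective CPU limit through a projected
	// downwardAPI volume; with no limit set, the file falls back to the
	// node's allocatable CPU.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}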
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:26:56.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:26:57.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 20 14:27:00.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-468 create -f -'
Jul 20 14:27:02.302: INFO: stderr: ""
Jul 20 14:27:02.302: INFO: stdout: "e2e-test-crd-publish-openapi-8786-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul 20 14:27:02.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-468 delete e2e-test-crd-publish-openapi-8786-crds test-cr'
Jul 20 14:27:02.525: INFO: stderr: ""
Jul 20 14:27:02.525: INFO: stdout: "e2e-test-crd-publish-openapi-8786-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jul 20 14:27:02.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-468 apply -f -'
Jul 20 14:27:02.946: INFO: stderr: ""
Jul 20 14:27:02.946: INFO: stdout: "e2e-test-crd-publish-openapi-8786-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul 20 14:27:02.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-468 delete e2e-test-crd-publish-openapi-8786-crds test-cr'
Jul 20 14:27:03.355: INFO: stderr: ""
Jul 20 14:27:03.355: INFO: stdout: "e2e-test-crd-publish-openapi-8786-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jul 20 14:27:03.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8786-crds'
Jul 20 14:27:03.649: INFO: stderr: ""
Jul 20 14:27:03.649: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8786-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:27:06.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-468" for this suite.

• [SLOW TEST:10.050 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":135,"skipped":2402,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:27:06.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8095
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jul 20 14:27:07.443: INFO: Found 0 stateful pods, waiting for 3
Jul 20 14:27:17.447: INFO: Found 2 stateful pods, waiting for 3
Jul 20 14:27:27.448: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:27:27.448: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:27:27.448: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:27:27.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8095 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 14:27:27.724: INFO: stderr: "I0720 14:27:27.583134    1475 log.go:172] (0xc0005ae210) (0xc0005c75e0) Create stream\nI0720 14:27:27.583212    1475 log.go:172] (0xc0005ae210) (0xc0005c75e0) Stream added, broadcasting: 1\nI0720 14:27:27.586739    1475 log.go:172] (0xc0005ae210) Reply frame received for 1\nI0720 14:27:27.586779    1475 log.go:172] (0xc0005ae210) (0xc000a0a000) Create stream\nI0720 14:27:27.586791    1475 log.go:172] (0xc0005ae210) (0xc000a0a000) Stream added, broadcasting: 3\nI0720 14:27:27.587937    1475 log.go:172] (0xc0005ae210) Reply frame received for 3\nI0720 14:27:27.587984    1475 log.go:172] (0xc0005ae210) (0xc000a0a0a0) Create stream\nI0720 14:27:27.587998    1475 log.go:172] (0xc0005ae210) (0xc000a0a0a0) Stream added, broadcasting: 5\nI0720 14:27:27.589190    1475 log.go:172] (0xc0005ae210) Reply frame received for 5\nI0720 14:27:27.683028    1475 log.go:172] (0xc0005ae210) Data frame received for 5\nI0720 14:27:27.683051    1475 log.go:172] (0xc000a0a0a0) (5) Data frame handling\nI0720 14:27:27.683068    1475 log.go:172] (0xc000a0a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 14:27:27.718093    1475 log.go:172] (0xc0005ae210) Data frame received for 5\nI0720 14:27:27.718137    1475 log.go:172] (0xc000a0a0a0) (5) Data frame handling\nI0720 14:27:27.718163    1475 log.go:172] (0xc0005ae210) Data frame received for 3\nI0720 14:27:27.718175    1475 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0720 14:27:27.718190    1475 log.go:172] (0xc000a0a000) (3) Data frame sent\nI0720 14:27:27.718208    1475 log.go:172] (0xc0005ae210) Data frame received for 3\nI0720 14:27:27.718220    1475 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0720 14:27:27.719669    1475 log.go:172] (0xc0005ae210) Data frame received for 1\nI0720 14:27:27.719682    1475 log.go:172] (0xc0005c75e0) (1) Data frame handling\nI0720 14:27:27.719693    1475 log.go:172] (0xc0005c75e0) (1) Data frame sent\nI0720 14:27:27.719704    1475 log.go:172] (0xc0005ae210) (0xc0005c75e0) Stream removed, broadcasting: 1\nI0720 14:27:27.719712    1475 log.go:172] (0xc0005ae210) Go away received\nI0720 14:27:27.720000    1475 log.go:172] (0xc0005ae210) (0xc0005c75e0) Stream removed, broadcasting: 1\nI0720 14:27:27.720016    1475 log.go:172] (0xc0005ae210) (0xc000a0a000) Stream removed, broadcasting: 3\nI0720 14:27:27.720024    1475 log.go:172] (0xc0005ae210) (0xc000a0a0a0) Stream removed, broadcasting: 5\n"
Jul 20 14:27:27.724: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 14:27:27.724: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 20 14:27:37.757: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 20 14:27:47.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8095 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 14:27:48.542: INFO: stderr: "I0720 14:27:48.423736    1498 log.go:172] (0xc0006e7ce0) (0xc000a7a0a0) Create stream\nI0720 14:27:48.423796    1498 log.go:172] (0xc0006e7ce0) (0xc000a7a0a0) Stream added, broadcasting: 1\nI0720 14:27:48.426453    1498 log.go:172] (0xc0006e7ce0) Reply frame received for 1\nI0720 14:27:48.426499    1498 log.go:172] (0xc0006e7ce0) (0xc000a36000) Create stream\nI0720 14:27:48.426514    1498 log.go:172] (0xc0006e7ce0) (0xc000a36000) Stream added, broadcasting: 3\nI0720 14:27:48.427394    1498 log.go:172] (0xc0006e7ce0) Reply frame received for 3\nI0720 14:27:48.427427    1498 log.go:172] (0xc0006e7ce0) (0xc000a360a0) Create stream\nI0720 14:27:48.427437    1498 log.go:172] (0xc0006e7ce0) (0xc000a360a0) Stream added, broadcasting: 5\nI0720 14:27:48.428167    1498 log.go:172] (0xc0006e7ce0) Reply frame received for 5\nI0720 14:27:48.519713    1498 log.go:172] (0xc0006e7ce0) Data frame received for 5\nI0720 14:27:48.519744    1498 log.go:172] (0xc000a360a0) (5) Data frame handling\nI0720 14:27:48.519766    1498 log.go:172] (0xc000a360a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 14:27:48.536033    1498 log.go:172] (0xc0006e7ce0) Data frame received for 3\nI0720 14:27:48.536056    1498 log.go:172] (0xc000a36000) (3) Data frame handling\nI0720 14:27:48.536145    1498 log.go:172] (0xc000a36000) (3) Data frame sent\nI0720 14:27:48.536319    1498 log.go:172] (0xc0006e7ce0) Data frame received for 5\nI0720 14:27:48.536340    1498 log.go:172] (0xc0006e7ce0) Data frame received for 3\nI0720 14:27:48.536371    1498 log.go:172] (0xc000a36000) (3) Data frame handling\nI0720 14:27:48.536390    1498 log.go:172] (0xc000a360a0) (5) Data frame handling\nI0720 14:27:48.537777    1498 log.go:172] (0xc0006e7ce0) Data frame received for 1\nI0720 14:27:48.537792    1498 log.go:172] (0xc000a7a0a0) (1) Data frame handling\nI0720 14:27:48.537803    1498 log.go:172] (0xc000a7a0a0) (1) Data frame sent\nI0720 14:27:48.537817    1498 log.go:172] (0xc0006e7ce0) (0xc000a7a0a0) Stream removed, broadcasting: 1\nI0720 14:27:48.537829    1498 log.go:172] (0xc0006e7ce0) Go away received\nI0720 14:27:48.538139    1498 log.go:172] (0xc0006e7ce0) (0xc000a7a0a0) Stream removed, broadcasting: 1\nI0720 14:27:48.538153    1498 log.go:172] (0xc0006e7ce0) (0xc000a36000) Stream removed, broadcasting: 3\nI0720 14:27:48.538158    1498 log.go:172] (0xc0006e7ce0) (0xc000a360a0) Stream removed, broadcasting: 5\n"
Jul 20 14:27:48.542: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 14:27:48.542: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 14:27:58.583: INFO: Waiting for StatefulSet statefulset-8095/ss2 to complete update
Jul 20 14:27:58.583: INFO: Waiting for Pod statefulset-8095/ss2-0 to have update revision ss2-84f9d6bf57 (currently at revision ss2-65c7964b94)
Jul 20 14:27:58.583: INFO: Waiting for Pod statefulset-8095/ss2-1 to have update revision ss2-84f9d6bf57 (currently at revision ss2-65c7964b94)
Jul 20 14:27:58.583: INFO: Waiting for Pod statefulset-8095/ss2-2 to have update revision ss2-84f9d6bf57 (currently at revision ss2-65c7964b94)
Jul 20 14:28:08.619: INFO: Waiting for StatefulSet statefulset-8095/ss2 to complete update
Jul 20 14:28:08.619: INFO: Waiting for Pod statefulset-8095/ss2-0 to have update revision ss2-84f9d6bf57 (currently at revision ss2-65c7964b94)
Jul 20 14:28:08.619: INFO: Waiting for Pod statefulset-8095/ss2-1 to have update revision ss2-84f9d6bf57 (currently at revision ss2-65c7964b94)
Jul 20 14:28:19.700: INFO: Waiting for StatefulSet statefulset-8095/ss2 to complete update
Jul 20 14:28:19.700: INFO: Waiting for Pod statefulset-8095/ss2-0 to have update revision ss2-84f9d6bf57 (currently at revision ss2-65c7964b94)
Jul 20 14:28:28.590: INFO: Waiting for StatefulSet statefulset-8095/ss2 to complete update
Jul 20 14:28:28.590: INFO: Waiting for Pod statefulset-8095/ss2-0 to have update revision ss2-84f9d6bf57 (currently at revision ss2-65c7964b94)
Jul 20 14:28:38.590: INFO: Waiting for StatefulSet statefulset-8095/ss2 to complete update
STEP: Rolling back to a previous revision
Jul 20 14:28:48.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8095 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 14:28:52.873: INFO: stderr: "I0720 14:28:52.471828    1520 log.go:172] (0xc000c77810) (0xc000829720) Create stream\nI0720 14:28:52.471920    1520 log.go:172] (0xc000c77810) (0xc000829720) Stream added, broadcasting: 1\nI0720 14:28:52.474463    1520 log.go:172] (0xc000c77810) Reply frame received for 1\nI0720 14:28:52.474521    1520 log.go:172] (0xc000c77810) (0xc000906000) Create stream\nI0720 14:28:52.474534    1520 log.go:172] (0xc000c77810) (0xc000906000) Stream added, broadcasting: 3\nI0720 14:28:52.475543    1520 log.go:172] (0xc000c77810) Reply frame received for 3\nI0720 14:28:52.475593    1520 log.go:172] (0xc000c77810) (0xc000906140) Create stream\nI0720 14:28:52.475617    1520 log.go:172] (0xc000c77810) (0xc000906140) Stream added, broadcasting: 5\nI0720 14:28:52.477342    1520 log.go:172] (0xc000c77810) Reply frame received for 5\nI0720 14:28:52.527962    1520 log.go:172] (0xc000c77810) Data frame received for 5\nI0720 14:28:52.527991    1520 log.go:172] (0xc000906140) (5) Data frame handling\nI0720 14:28:52.528010    1520 log.go:172] (0xc000906140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 14:28:52.863136    1520 log.go:172] (0xc000c77810) Data frame received for 5\nI0720 14:28:52.863184    1520 log.go:172] (0xc000906140) (5) Data frame handling\nI0720 14:28:52.863220    1520 log.go:172] (0xc000c77810) Data frame received for 3\nI0720 14:28:52.863239    1520 log.go:172] (0xc000906000) (3) Data frame handling\nI0720 14:28:52.863274    1520 log.go:172] (0xc000906000) (3) Data frame sent\nI0720 14:28:52.863291    1520 log.go:172] (0xc000c77810) Data frame received for 3\nI0720 14:28:52.863311    1520 log.go:172] (0xc000906000) (3) Data frame handling\nI0720 14:28:52.866045    1520 log.go:172] (0xc000c77810) Data frame received for 1\nI0720 14:28:52.866083    1520 log.go:172] (0xc000829720) (1) Data frame handling\nI0720 14:28:52.866103    1520 log.go:172] (0xc000829720) (1) Data frame sent\nI0720 14:28:52.866124    1520 log.go:172] (0xc000c77810) (0xc000829720) Stream removed, broadcasting: 1\nI0720 14:28:52.866225    1520 log.go:172] (0xc000c77810) Go away received\nI0720 14:28:52.866565    1520 log.go:172] (0xc000c77810) (0xc000829720) Stream removed, broadcasting: 1\nI0720 14:28:52.866596    1520 log.go:172] (0xc000c77810) (0xc000906000) Stream removed, broadcasting: 3\nI0720 14:28:52.866608    1520 log.go:172] (0xc000c77810) (0xc000906140) Stream removed, broadcasting: 5\n"
Jul 20 14:28:52.873: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 14:28:52.873: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 14:29:02.974: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 20 14:29:13.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8095 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 14:29:14.444: INFO: stderr: "I0720 14:29:14.353500    1555 log.go:172] (0xc000915340) (0xc00094a6e0) Create stream\nI0720 14:29:14.353585    1555 log.go:172] (0xc000915340) (0xc00094a6e0) Stream added, broadcasting: 1\nI0720 14:29:14.358969    1555 log.go:172] (0xc000915340) Reply frame received for 1\nI0720 14:29:14.359003    1555 log.go:172] (0xc000915340) (0xc000a8e000) Create stream\nI0720 14:29:14.359010    1555 log.go:172] (0xc000915340) (0xc000a8e000) Stream added, broadcasting: 3\nI0720 14:29:14.359676    1555 log.go:172] (0xc000915340) Reply frame received for 3\nI0720 14:29:14.359715    1555 log.go:172] (0xc000915340) (0xc00094a000) Create stream\nI0720 14:29:14.359730    1555 log.go:172] (0xc000915340) (0xc00094a000) Stream added, broadcasting: 5\nI0720 14:29:14.360374    1555 log.go:172] (0xc000915340) Reply frame received for 5\nI0720 14:29:14.424112    1555 log.go:172] (0xc000915340) Data frame received for 5\nI0720 14:29:14.424132    1555 log.go:172] (0xc00094a000) (5) Data frame handling\nI0720 14:29:14.424148    1555 log.go:172] (0xc00094a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 14:29:14.437596    1555 log.go:172] (0xc000915340) Data frame received for 3\nI0720 14:29:14.437628    1555 log.go:172] (0xc000a8e000) (3) Data frame handling\nI0720 14:29:14.437648    1555 log.go:172] (0xc000a8e000) (3) Data frame sent\nI0720 14:29:14.437688    1555 log.go:172] (0xc000915340) Data frame received for 3\nI0720 14:29:14.437698    1555 log.go:172] (0xc000a8e000) (3) Data frame handling\nI0720 14:29:14.437863    1555 log.go:172] (0xc000915340) Data frame received for 5\nI0720 14:29:14.437876    1555 log.go:172] (0xc00094a000) (5) Data frame handling\nI0720 14:29:14.439313    1555 log.go:172] (0xc000915340) Data frame received for 1\nI0720 14:29:14.439328    1555 log.go:172] (0xc00094a6e0) (1) Data frame handling\nI0720 14:29:14.439335    1555 log.go:172] (0xc00094a6e0) (1) Data frame sent\nI0720 14:29:14.439345    1555 log.go:172] (0xc000915340) (0xc00094a6e0) Stream removed, broadcasting: 1\nI0720 14:29:14.439353    1555 log.go:172] (0xc000915340) Go away received\nI0720 14:29:14.439630    1555 log.go:172] (0xc000915340) (0xc00094a6e0) Stream removed, broadcasting: 1\nI0720 14:29:14.439644    1555 log.go:172] (0xc000915340) (0xc000a8e000) Stream removed, broadcasting: 3\nI0720 14:29:14.439650    1555 log.go:172] (0xc000915340) (0xc00094a000) Stream removed, broadcasting: 5\n"
Jul 20 14:29:14.445: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 14:29:14.445: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 14:29:24.542: INFO: Waiting for StatefulSet statefulset-8095/ss2 to complete update
Jul 20 14:29:24.542: INFO: Waiting for Pod statefulset-8095/ss2-0 to have update revision ss2-65c7964b94 (currently at revision ss2-84f9d6bf57)
Jul 20 14:29:24.542: INFO: Waiting for Pod statefulset-8095/ss2-1 to have update revision ss2-65c7964b94 (currently at revision ss2-84f9d6bf57)
Jul 20 14:29:34.549: INFO: Waiting for StatefulSet statefulset-8095/ss2 to complete update
Jul 20 14:29:34.549: INFO: Waiting for Pod statefulset-8095/ss2-0 to have update revision ss2-65c7964b94 (currently at revision ss2-84f9d6bf57)
Jul 20 14:29:44.548: INFO: Waiting for StatefulSet statefulset-8095/ss2 to complete update
Jul 20 14:29:44.548: INFO: Waiting for Pod statefulset-8095/ss2-0 to have update revision ss2-65c7964b94 (currently at revision ss2-84f9d6bf57)
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 20 14:29:54.550: INFO: Deleting all statefulset in ns statefulset-8095
Jul 20 14:29:54.553: INFO: Scaling statefulset ss2 to 0
Jul 20 14:30:24.571: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 14:30:24.573: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:30:24.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8095" for this suite.

• [SLOW TEST:197.711 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":136,"skipped":2422,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:30:24.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 20 14:30:24.824: INFO: Waiting up to 5m0s for pod "pod-d005a6d9-f773-4d1e-810f-8055e15fc072" in namespace "emptydir-1529" to be "Succeeded or Failed"
Jul 20 14:30:24.869: INFO: Pod "pod-d005a6d9-f773-4d1e-810f-8055e15fc072": Phase="Pending", Reason="", readiness=false. Elapsed: 45.193938ms
Jul 20 14:30:26.873: INFO: Pod "pod-d005a6d9-f773-4d1e-810f-8055e15fc072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049595037s
Jul 20 14:30:28.877: INFO: Pod "pod-d005a6d9-f773-4d1e-810f-8055e15fc072": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05383354s
Jul 20 14:30:31.192: INFO: Pod "pod-d005a6d9-f773-4d1e-810f-8055e15fc072": Phase="Pending", Reason="", readiness=false. Elapsed: 6.367990979s
Jul 20 14:30:33.391: INFO: Pod "pod-d005a6d9-f773-4d1e-810f-8055e15fc072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.567857381s
STEP: Saw pod success
Jul 20 14:30:33.391: INFO: Pod "pod-d005a6d9-f773-4d1e-810f-8055e15fc072" satisfied condition "Succeeded or Failed"
Jul 20 14:30:33.395: INFO: Trying to get logs from node kali-worker pod pod-d005a6d9-f773-4d1e-810f-8055e15fc072 container test-container: 
STEP: delete the pod
Jul 20 14:30:33.664: INFO: Waiting for pod pod-d005a6d9-f773-4d1e-810f-8055e15fc072 to disappear
Jul 20 14:30:33.687: INFO: Pod pod-d005a6d9-f773-4d1e-810f-8055e15fc072 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:30:33.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1529" for this suite.

• [SLOW TEST:9.039 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2454,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:30:33.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5244
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5244
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-5244
Jul 20 14:30:34.742: INFO: Found 0 stateful pods, waiting for 1
Jul 20 14:30:44.746: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 20 14:30:44.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5244 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 14:30:45.230: INFO: stderr: "I0720 14:30:44.881322    1575 log.go:172] (0xc000a20a50) (0xc0006b75e0) Create stream\nI0720 14:30:44.881379    1575 log.go:172] (0xc000a20a50) (0xc0006b75e0) Stream added, broadcasting: 1\nI0720 14:30:44.883647    1575 log.go:172] (0xc000a20a50) Reply frame received for 1\nI0720 14:30:44.883687    1575 log.go:172] (0xc000a20a50) (0xc000a06000) Create stream\nI0720 14:30:44.883704    1575 log.go:172] (0xc000a20a50) (0xc000a06000) Stream added, broadcasting: 3\nI0720 14:30:44.884853    1575 log.go:172] (0xc000a20a50) Reply frame received for 3\nI0720 14:30:44.884895    1575 log.go:172] (0xc000a20a50) (0xc0003beaa0) Create stream\nI0720 14:30:44.884923    1575 log.go:172] (0xc000a20a50) (0xc0003beaa0) Stream added, broadcasting: 5\nI0720 14:30:44.885927    1575 log.go:172] (0xc000a20a50) Reply frame received for 5\nI0720 14:30:44.951248    1575 log.go:172] (0xc000a20a50) Data frame received for 5\nI0720 14:30:44.951270    1575 log.go:172] (0xc0003beaa0) (5) Data frame handling\nI0720 14:30:44.951288    1575 log.go:172] (0xc0003beaa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 14:30:45.219603    1575 log.go:172] (0xc000a20a50) Data frame received for 3\nI0720 14:30:45.219629    1575 log.go:172] (0xc000a06000) (3) Data frame handling\nI0720 14:30:45.219636    1575 log.go:172] (0xc000a06000) (3) Data frame sent\nI0720 14:30:45.219642    1575 log.go:172] (0xc000a20a50) Data frame received for 3\nI0720 14:30:45.219646    1575 log.go:172] (0xc000a06000) (3) Data frame handling\nI0720 14:30:45.219677    1575 log.go:172] (0xc000a20a50) Data frame received for 5\nI0720 14:30:45.219705    1575 log.go:172] (0xc0003beaa0) (5) Data frame handling\nI0720 14:30:45.222040    1575 log.go:172] (0xc000a20a50) Data frame received for 1\nI0720 14:30:45.222073    1575 log.go:172] (0xc0006b75e0) (1) Data frame handling\nI0720 14:30:45.222127    1575 log.go:172] (0xc0006b75e0) (1) Data frame sent\nI0720 14:30:45.222156    1575 log.go:172] (0xc000a20a50) (0xc0006b75e0) Stream removed, broadcasting: 1\nI0720 14:30:45.222200    1575 log.go:172] (0xc000a20a50) Go away received\nI0720 14:30:45.222650    1575 log.go:172] (0xc000a20a50) (0xc0006b75e0) Stream removed, broadcasting: 1\nI0720 14:30:45.222675    1575 log.go:172] (0xc000a20a50) (0xc000a06000) Stream removed, broadcasting: 3\nI0720 14:30:45.222693    1575 log.go:172] (0xc000a20a50) (0xc0003beaa0) Stream removed, broadcasting: 5\n"
Jul 20 14:30:45.230: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 14:30:45.230: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 14:30:45.234: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 20 14:30:55.238: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 14:30:55.238: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 14:30:55.288: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999707s
Jul 20 14:30:56.360: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.95853628s
Jul 20 14:30:57.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.886589867s
Jul 20 14:30:58.377: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.882527483s
Jul 20 14:30:59.381: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.869368297s
Jul 20 14:31:00.395: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.864676639s
Jul 20 14:31:01.399: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.851203638s
Jul 20 14:31:02.403: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.846844023s
Jul 20 14:31:03.408: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.843174123s
Jul 20 14:31:04.697: INFO: Verifying statefulset ss doesn't scale past 1 for another 837.89296ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-5244
Jul 20 14:31:05.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5244 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 14:31:06.978: INFO: stderr: "I0720 14:31:06.911878    1598 log.go:172] (0xc0000e0dc0) (0xc0007f3900) Create stream\nI0720 14:31:06.911922    1598 log.go:172] (0xc0000e0dc0) (0xc0007f3900) Stream added, broadcasting: 1\nI0720 14:31:06.913781    1598 log.go:172] (0xc0000e0dc0) Reply frame received for 1\nI0720 14:31:06.913808    1598 log.go:172] (0xc0000e0dc0) (0xc00068f860) Create stream\nI0720 14:31:06.913815    1598 log.go:172] (0xc0000e0dc0) (0xc00068f860) Stream added, broadcasting: 3\nI0720 14:31:06.914450    1598 log.go:172] (0xc0000e0dc0) Reply frame received for 3\nI0720 14:31:06.914475    1598 log.go:172] (0xc0000e0dc0) (0xc00044ac80) Create stream\nI0720 14:31:06.914482    1598 log.go:172] (0xc0000e0dc0) (0xc00044ac80) Stream added, broadcasting: 5\nI0720 14:31:06.915194    1598 log.go:172] (0xc0000e0dc0) Reply frame received for 5\nI0720 14:31:06.970223    1598 log.go:172] (0xc0000e0dc0) Data frame received for 3\nI0720 14:31:06.970275    1598 log.go:172] (0xc00068f860) (3) Data frame handling\nI0720 14:31:06.970296    1598 log.go:172] (0xc00068f860) (3) Data frame sent\nI0720 14:31:06.970328    1598 log.go:172] (0xc0000e0dc0) Data frame received for 3\nI0720 14:31:06.970344    1598 log.go:172] (0xc00068f860) (3) Data frame handling\nI0720 14:31:06.970382    1598 log.go:172] (0xc0000e0dc0) Data frame received for 5\nI0720 14:31:06.970417    1598 log.go:172] (0xc00044ac80) (5) Data frame handling\nI0720 14:31:06.970444    1598 log.go:172] (0xc00044ac80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 14:31:06.970474    1598 log.go:172] (0xc0000e0dc0) Data frame received for 5\nI0720 14:31:06.970512    1598 log.go:172] (0xc00044ac80) (5) Data frame handling\nI0720 14:31:06.971790    1598 log.go:172] (0xc0000e0dc0) Data frame received for 1\nI0720 14:31:06.971809    1598 log.go:172] (0xc0007f3900) (1) Data frame handling\nI0720 14:31:06.971819    1598 log.go:172] (0xc0007f3900) (1) Data frame sent\nI0720 14:31:06.971903    1598 log.go:172] (0xc0000e0dc0) (0xc0007f3900) Stream removed, broadcasting: 1\nI0720 14:31:06.971956    1598 log.go:172] (0xc0000e0dc0) Go away received\nI0720 14:31:06.972225    1598 log.go:172] (0xc0000e0dc0) (0xc0007f3900) Stream removed, broadcasting: 1\nI0720 14:31:06.972248    1598 log.go:172] (0xc0000e0dc0) (0xc00068f860) Stream removed, broadcasting: 3\nI0720 14:31:06.972265    1598 log.go:172] (0xc0000e0dc0) (0xc00044ac80) Stream removed, broadcasting: 5\n"
Jul 20 14:31:06.978: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 14:31:06.978: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 14:31:07.079: INFO: Found 1 stateful pod, waiting for 3
Jul 20 14:31:17.217: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:31:17.217: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:31:17.217: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 20 14:31:27.083: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:31:27.083: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:31:27.083: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul 20 14:31:27.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5244 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 14:31:27.283: INFO: stderr: "I0720 14:31:27.216986    1619 log.go:172] (0xc0003ca210) (0xc00099e000) Create stream\nI0720 14:31:27.217042    1619 log.go:172] (0xc0003ca210) (0xc00099e000) Stream added, broadcasting: 1\nI0720 14:31:27.219297    1619 log.go:172] (0xc0003ca210) Reply frame received for 1\nI0720 14:31:27.219331    1619 log.go:172] (0xc0003ca210) (0xc00099e0a0) Create stream\nI0720 14:31:27.219346    1619 log.go:172] (0xc0003ca210) (0xc00099e0a0) Stream added, broadcasting: 3\nI0720 14:31:27.220251    1619 log.go:172] (0xc0003ca210) Reply frame received for 3\nI0720 14:31:27.220279    1619 log.go:172] (0xc0003ca210) (0xc00099e140) Create stream\nI0720 14:31:27.220287    1619 log.go:172] (0xc0003ca210) (0xc00099e140) Stream added, broadcasting: 5\nI0720 14:31:27.221198    1619 log.go:172] (0xc0003ca210) Reply frame received for 5\nI0720 14:31:27.277390    1619 log.go:172] (0xc0003ca210) Data frame received for 3\nI0720 14:31:27.277427    1619 log.go:172] (0xc0003ca210) Data frame received for 5\nI0720 14:31:27.277445    1619 log.go:172] (0xc00099e140) (5) Data frame handling\nI0720 14:31:27.277469    1619 log.go:172] (0xc00099e140) (5) Data frame sent\nI0720 14:31:27.277491    1619 log.go:172] (0xc0003ca210) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 14:31:27.277505    1619 log.go:172] (0xc00099e140) (5) Data frame handling\nI0720 14:31:27.277524    1619 log.go:172] (0xc00099e0a0) (3) Data frame handling\nI0720 14:31:27.277537    1619 log.go:172] (0xc00099e0a0) (3) Data frame sent\nI0720 14:31:27.277550    1619 log.go:172] (0xc0003ca210) Data frame received for 3\nI0720 14:31:27.277571    1619 log.go:172] (0xc00099e0a0) (3) Data frame handling\nI0720 14:31:27.278982    1619 log.go:172] (0xc0003ca210) Data frame received for 1\nI0720 14:31:27.279011    1619 log.go:172] (0xc00099e000) (1) Data frame handling\nI0720 14:31:27.279030    1619 log.go:172] (0xc00099e000) (1) Data frame sent\nI0720 14:31:27.279049    1619 log.go:172] (0xc0003ca210) (0xc00099e000) Stream removed, broadcasting: 1\nI0720 14:31:27.279063    1619 log.go:172] (0xc0003ca210) Go away received\nI0720 14:31:27.279419    1619 log.go:172] (0xc0003ca210) (0xc00099e000) Stream removed, broadcasting: 1\nI0720 14:31:27.279432    1619 log.go:172] (0xc0003ca210) (0xc00099e0a0) Stream removed, broadcasting: 3\nI0720 14:31:27.279439    1619 log.go:172] (0xc0003ca210) (0xc00099e140) Stream removed, broadcasting: 5\n"
Jul 20 14:31:27.283: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 14:31:27.283: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 14:31:27.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5244 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 14:31:27.536: INFO: stderr: "I0720 14:31:27.406131    1639 log.go:172] (0xc0000e8370) (0xc000816000) Create stream\nI0720 14:31:27.406215    1639 log.go:172] (0xc0000e8370) (0xc000816000) Stream added, broadcasting: 1\nI0720 14:31:27.411595    1639 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0720 14:31:27.411742    1639 log.go:172] (0xc0000e8370) (0xc000027540) Create stream\nI0720 14:31:27.411769    1639 log.go:172] (0xc0000e8370) (0xc000027540) Stream added, broadcasting: 3\nI0720 14:31:27.413881    1639 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0720 14:31:27.413936    1639 log.go:172] (0xc0000e8370) (0xc0008160a0) Create stream\nI0720 14:31:27.413965    1639 log.go:172] (0xc0000e8370) (0xc0008160a0) Stream added, broadcasting: 5\nI0720 14:31:27.414875    1639 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0720 14:31:27.482033    1639 log.go:172] (0xc0000e8370) Data frame received for 5\nI0720 14:31:27.482064    1639 log.go:172] (0xc0008160a0) (5) Data frame handling\nI0720 14:31:27.482083    1639 log.go:172] (0xc0008160a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 14:31:27.527969    1639 log.go:172] (0xc0000e8370) Data frame received for 5\nI0720 14:31:27.528028    1639 log.go:172] (0xc0008160a0) (5) Data frame handling\nI0720 14:31:27.528062    1639 log.go:172] (0xc0000e8370) Data frame received for 3\nI0720 14:31:27.528091    1639 log.go:172] (0xc000027540) (3) Data frame handling\nI0720 14:31:27.528122    1639 log.go:172] (0xc000027540) (3) Data frame sent\nI0720 14:31:27.528142    1639 log.go:172] (0xc0000e8370) Data frame received for 3\nI0720 14:31:27.528160    1639 log.go:172] (0xc000027540) (3) Data frame handling\nI0720 14:31:27.530091    1639 log.go:172] (0xc0000e8370) Data frame received for 1\nI0720 14:31:27.530135    1639 log.go:172] (0xc000816000) (1) Data frame handling\nI0720 14:31:27.530167    1639 log.go:172] (0xc000816000) (1) Data frame sent\nI0720 14:31:27.530206    1639 log.go:172] (0xc0000e8370) (0xc000816000) Stream removed, broadcasting: 1\nI0720 14:31:27.530241    1639 log.go:172] (0xc0000e8370) Go away received\nI0720 14:31:27.530829    1639 log.go:172] (0xc0000e8370) (0xc000816000) Stream removed, broadcasting: 1\nI0720 14:31:27.530866    1639 log.go:172] (0xc0000e8370) (0xc000027540) Stream removed, broadcasting: 3\nI0720 14:31:27.530887    1639 log.go:172] (0xc0000e8370) (0xc0008160a0) Stream removed, broadcasting: 5\n"
Jul 20 14:31:27.536: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 14:31:27.536: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 14:31:27.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5244 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 14:31:27.789: INFO: stderr: "I0720 14:31:27.657753    1659 log.go:172] (0xc0009a4210) (0xc000504b40) Create stream\nI0720 14:31:27.657828    1659 log.go:172] (0xc0009a4210) (0xc000504b40) Stream added, broadcasting: 1\nI0720 14:31:27.661704    1659 log.go:172] (0xc0009a4210) Reply frame received for 1\nI0720 14:31:27.661771    1659 log.go:172] (0xc0009a4210) (0xc000956000) Create stream\nI0720 14:31:27.661800    1659 log.go:172] (0xc0009a4210) (0xc000956000) Stream added, broadcasting: 3\nI0720 14:31:27.663008    1659 log.go:172] (0xc0009a4210) Reply frame received for 3\nI0720 14:31:27.663038    1659 log.go:172] (0xc0009a4210) (0xc0007c9540) Create stream\nI0720 14:31:27.663046    1659 log.go:172] (0xc0009a4210) (0xc0007c9540) Stream added, broadcasting: 5\nI0720 14:31:27.664128    1659 log.go:172] (0xc0009a4210) Reply frame received for 5\nI0720 14:31:27.719236    1659 log.go:172] (0xc0009a4210) Data frame received for 5\nI0720 14:31:27.719281    1659 log.go:172] (0xc0007c9540) (5) Data frame handling\nI0720 14:31:27.719308    1659 log.go:172] (0xc0007c9540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 14:31:27.781874    1659 log.go:172] (0xc0009a4210) Data frame received for 5\nI0720 14:31:27.781942    1659 log.go:172] (0xc0007c9540) (5) Data frame handling\nI0720 14:31:27.781986    1659 log.go:172] (0xc0009a4210) Data frame received for 3\nI0720 14:31:27.782020    1659 log.go:172] (0xc000956000) (3) Data frame handling\nI0720 14:31:27.782048    1659 log.go:172] (0xc000956000) (3) Data frame sent\nI0720 14:31:27.782068    1659 log.go:172] (0xc0009a4210) Data frame received for 3\nI0720 14:31:27.782084    1659 log.go:172] (0xc000956000) (3) Data frame handling\nI0720 14:31:27.783846    1659 log.go:172] (0xc0009a4210) Data frame received for 1\nI0720 14:31:27.783885    1659 log.go:172] (0xc000504b40) (1) Data frame handling\nI0720 14:31:27.783926    1659 log.go:172] (0xc000504b40) (1) Data frame sent\nI0720 14:31:27.783959    1659 log.go:172] (0xc0009a4210) (0xc000504b40) Stream removed, broadcasting: 1\nI0720 14:31:27.783988    1659 log.go:172] (0xc0009a4210) Go away received\nI0720 14:31:27.784306    1659 log.go:172] (0xc0009a4210) (0xc000504b40) Stream removed, broadcasting: 1\nI0720 14:31:27.784324    1659 log.go:172] (0xc0009a4210) (0xc000956000) Stream removed, broadcasting: 3\nI0720 14:31:27.784337    1659 log.go:172] (0xc0009a4210) (0xc0007c9540) Stream removed, broadcasting: 5\n"
Jul 20 14:31:27.789: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 14:31:27.789: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 14:31:27.789: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 14:31:27.809: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul 20 14:31:37.816: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 14:31:37.816: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 14:31:37.816: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 14:31:37.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999263s
Jul 20 14:31:39.002: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.952804337s
Jul 20 14:31:40.097: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.821192371s
Jul 20 14:31:41.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.725993285s
Jul 20 14:31:42.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.695765261s
Jul 20 14:31:43.136: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.691387425s
Jul 20 14:31:44.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.687187775s
Jul 20 14:31:45.199: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.682614811s
Jul 20 14:31:46.230: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.623913629s
Jul 20 14:31:47.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 592.303771ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-5244
Jul 20 14:31:48.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5244 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 14:31:48.631: INFO: stderr: "I0720 14:31:48.563198    1681 log.go:172] (0xc00093c8f0) (0xc0006f3680) Create stream\nI0720 14:31:48.563246    1681 log.go:172] (0xc00093c8f0) (0xc0006f3680) Stream added, broadcasting: 1\nI0720 14:31:48.565343    1681 log.go:172] (0xc00093c8f0) Reply frame received for 1\nI0720 14:31:48.565384    1681 log.go:172] (0xc00093c8f0) (0xc0006f3720) Create stream\nI0720 14:31:48.565395    1681 log.go:172] (0xc00093c8f0) (0xc0006f3720) Stream added, broadcasting: 3\nI0720 14:31:48.566137    1681 log.go:172] (0xc00093c8f0) Reply frame received for 3\nI0720 14:31:48.566161    1681 log.go:172] (0xc00093c8f0) (0xc000afc000) Create stream\nI0720 14:31:48.566169    1681 log.go:172] (0xc00093c8f0) (0xc000afc000) Stream added, broadcasting: 5\nI0720 14:31:48.566806    1681 log.go:172] (0xc00093c8f0) Reply frame received for 5\nI0720 14:31:48.616331    1681 log.go:172] (0xc00093c8f0) Data frame received for 5\nI0720 14:31:48.616354    1681 log.go:172] (0xc000afc000) (5) Data frame handling\nI0720 14:31:48.616376    1681 log.go:172] (0xc000afc000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 14:31:48.622933    1681 log.go:172] (0xc00093c8f0) Data frame received for 3\nI0720 14:31:48.622956    1681 log.go:172] (0xc0006f3720) (3) Data frame handling\nI0720 14:31:48.622974    1681 log.go:172] (0xc0006f3720) (3) Data frame sent\nI0720 14:31:48.626799    1681 log.go:172] (0xc00093c8f0) Data frame received for 1\nI0720 14:31:48.626839    1681 log.go:172] (0xc00093c8f0) Data frame received for 3\nI0720 14:31:48.626874    1681 log.go:172] (0xc0006f3720) (3) Data frame handling\nI0720 14:31:48.626910    1681 log.go:172] (0xc0006f3680) (1) Data frame handling\nI0720 14:31:48.626931    1681 log.go:172] (0xc0006f3680) (1) Data frame sent\nI0720 14:31:48.626956    1681 log.go:172] (0xc00093c8f0) (0xc0006f3680) Stream removed, broadcasting: 1\nI0720 14:31:48.626994    1681 log.go:172] (0xc00093c8f0) Data frame received for 5\nI0720 14:31:48.627013    1681 log.go:172] (0xc000afc000) (5) Data frame handling\nI0720 14:31:48.627029    1681 log.go:172] (0xc00093c8f0) Go away received\nI0720 14:31:48.627343    1681 log.go:172] (0xc00093c8f0) (0xc0006f3680) Stream removed, broadcasting: 1\nI0720 14:31:48.627363    1681 log.go:172] (0xc00093c8f0) (0xc0006f3720) Stream removed, broadcasting: 3\nI0720 14:31:48.627375    1681 log.go:172] (0xc00093c8f0) (0xc000afc000) Stream removed, broadcasting: 5\n"
Jul 20 14:31:48.631: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 14:31:48.631: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 14:31:48.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5244 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 14:31:48.856: INFO: stderr: "I0720 14:31:48.785745    1702 log.go:172] (0xc000996160) (0xc000a00000) Create stream\nI0720 14:31:48.785796    1702 log.go:172] (0xc000996160) (0xc000a00000) Stream added, broadcasting: 1\nI0720 14:31:48.788212    1702 log.go:172] (0xc000996160) Reply frame received for 1\nI0720 14:31:48.788254    1702 log.go:172] (0xc000996160) (0xc0005af680) Create stream\nI0720 14:31:48.788264    1702 log.go:172] (0xc000996160) (0xc0005af680) Stream added, broadcasting: 3\nI0720 14:31:48.789151    1702 log.go:172] (0xc000996160) Reply frame received for 3\nI0720 14:31:48.789200    1702 log.go:172] (0xc000996160) (0xc0008f6000) Create stream\nI0720 14:31:48.789219    1702 log.go:172] (0xc000996160) (0xc0008f6000) Stream added, broadcasting: 5\nI0720 14:31:48.789900    1702 log.go:172] (0xc000996160) Reply frame received for 5\nI0720 14:31:48.851008    1702 log.go:172] (0xc000996160) Data frame received for 3\nI0720 14:31:48.851030    1702 log.go:172] (0xc0005af680) (3) Data frame handling\nI0720 14:31:48.851041    1702 log.go:172] (0xc0005af680) (3) Data frame sent\nI0720 14:31:48.851061    1702 log.go:172] (0xc000996160) Data frame received for 5\nI0720 14:31:48.851077    1702 log.go:172] (0xc0008f6000) (5) Data frame handling\nI0720 14:31:48.851085    1702 log.go:172] (0xc0008f6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 14:31:48.851101    1702 log.go:172] (0xc000996160) Data frame received for 3\nI0720 14:31:48.851127    1702 log.go:172] (0xc0005af680) (3) Data frame handling\nI0720 14:31:48.851144    1702 log.go:172] (0xc000996160) Data frame received for 5\nI0720 14:31:48.851155    1702 log.go:172] (0xc0008f6000) (5) Data frame handling\nI0720 14:31:48.851918    1702 log.go:172] (0xc000996160) Data frame received for 1\nI0720 14:31:48.851940    1702 log.go:172] (0xc000a00000) (1) Data frame handling\nI0720 14:31:48.851963    1702 log.go:172] (0xc000a00000) (1) Data frame sent\nI0720 14:31:48.851986    1702 log.go:172] (0xc000996160) (0xc000a00000) Stream removed, broadcasting: 1\nI0720 14:31:48.851997    1702 log.go:172] (0xc000996160) Go away received\nI0720 14:31:48.852285    1702 log.go:172] (0xc000996160) (0xc000a00000) Stream removed, broadcasting: 1\nI0720 14:31:48.852307    1702 log.go:172] (0xc000996160) (0xc0005af680) Stream removed, broadcasting: 3\nI0720 14:31:48.852320    1702 log.go:172] (0xc000996160) (0xc0008f6000) Stream removed, broadcasting: 5\n"
Jul 20 14:31:48.856: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 14:31:48.856: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 14:31:48.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5244 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 14:31:49.169: INFO: stderr: "I0720 14:31:49.096167    1721 log.go:172] (0xc00094c000) (0xc000910000) Create stream\nI0720 14:31:49.096239    1721 log.go:172] (0xc00094c000) (0xc000910000) Stream added, broadcasting: 1\nI0720 14:31:49.099729    1721 log.go:172] (0xc00094c000) Reply frame received for 1\nI0720 14:31:49.099780    1721 log.go:172] (0xc00094c000) (0xc0008b6000) Create stream\nI0720 14:31:49.099813    1721 log.go:172] (0xc00094c000) (0xc0008b6000) Stream added, broadcasting: 3\nI0720 14:31:49.101053    1721 log.go:172] (0xc00094c000) Reply frame received for 3\nI0720 14:31:49.101406    1721 log.go:172] (0xc00094c000) (0xc0008e6280) Create stream\nI0720 14:31:49.101427    1721 log.go:172] (0xc00094c000) (0xc0008e6280) Stream added, broadcasting: 5\nI0720 14:31:49.102243    1721 log.go:172] (0xc00094c000) Reply frame received for 5\nI0720 14:31:49.158168    1721 log.go:172] (0xc00094c000) Data frame received for 5\nI0720 14:31:49.158204    1721 log.go:172] (0xc0008e6280) (5) Data frame handling\nI0720 14:31:49.158230    1721 log.go:172] (0xc0008e6280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 14:31:49.161494    1721 log.go:172] (0xc00094c000) Data frame received for 5\nI0720 14:31:49.161514    1721 log.go:172] (0xc0008e6280) (5) Data frame handling\nI0720 14:31:49.161538    1721 log.go:172] (0xc00094c000) Data frame received for 3\nI0720 14:31:49.161547    1721 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0720 14:31:49.161555    1721 log.go:172] (0xc0008b6000) (3) Data frame sent\nI0720 14:31:49.161562    1721 log.go:172] (0xc00094c000) Data frame received for 3\nI0720 14:31:49.161568    1721 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0720 14:31:49.163874    1721 log.go:172] (0xc00094c000) Data frame received for 1\nI0720 14:31:49.163892    1721 log.go:172] (0xc000910000) (1) Data frame handling\nI0720 14:31:49.163901    1721 log.go:172] (0xc000910000) (1) Data frame sent\nI0720 14:31:49.163911    1721 log.go:172] (0xc00094c000) (0xc000910000) Stream removed, broadcasting: 1\nI0720 14:31:49.164243    1721 log.go:172] (0xc00094c000) (0xc000910000) Stream removed, broadcasting: 1\nI0720 14:31:49.164263    1721 log.go:172] (0xc00094c000) (0xc0008b6000) Stream removed, broadcasting: 3\nI0720 14:31:49.164434    1721 log.go:172] (0xc00094c000) (0xc0008e6280) Stream removed, broadcasting: 5\n"
Jul 20 14:31:49.169: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 14:31:49.169: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 14:31:49.169: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 20 14:32:29.620: INFO: Deleting all statefulset in ns statefulset-5244
Jul 20 14:32:29.655: INFO: Scaling statefulset ss to 0
Jul 20 14:32:29.770: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 14:32:29.773: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:32:29.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5244" for this suite.

• [SLOW TEST:116.074 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":138,"skipped":2497,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:32:29.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 20 14:32:32.690: INFO: Pod name wrapped-volume-race-6ba1a887-cce6-4f0f-b142-9862318ca62e: Found 0 pods out of 5
Jul 20 14:32:38.066: INFO: Pod name wrapped-volume-race-6ba1a887-cce6-4f0f-b142-9862318ca62e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6ba1a887-cce6-4f0f-b142-9862318ca62e in namespace emptydir-wrapper-1184, will wait for the garbage collector to delete the pods
Jul 20 14:33:04.993: INFO: Deleting ReplicationController wrapped-volume-race-6ba1a887-cce6-4f0f-b142-9862318ca62e took: 68.648916ms
Jul 20 14:33:05.693: INFO: Terminating ReplicationController wrapped-volume-race-6ba1a887-cce6-4f0f-b142-9862318ca62e pods took: 700.265264ms
STEP: Creating RC which spawns configmap-volume pods
Jul 20 14:33:23.810: INFO: Pod name wrapped-volume-race-e7ce257e-182c-4c41-87e7-a41f07b7ab27: Found 0 pods out of 5
Jul 20 14:33:28.898: INFO: Pod name wrapped-volume-race-e7ce257e-182c-4c41-87e7-a41f07b7ab27: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e7ce257e-182c-4c41-87e7-a41f07b7ab27 in namespace emptydir-wrapper-1184, will wait for the garbage collector to delete the pods
Jul 20 14:33:50.305: INFO: Deleting ReplicationController wrapped-volume-race-e7ce257e-182c-4c41-87e7-a41f07b7ab27 took: 139.866499ms
Jul 20 14:33:50.905: INFO: Terminating ReplicationController wrapped-volume-race-e7ce257e-182c-4c41-87e7-a41f07b7ab27 pods took: 600.20445ms
STEP: Creating RC which spawns configmap-volume pods
Jul 20 14:34:13.868: INFO: Pod name wrapped-volume-race-38e898a3-ce11-42de-b1ba-de207dd10dd9: Found 0 pods out of 5
Jul 20 14:34:18.898: INFO: Pod name wrapped-volume-race-38e898a3-ce11-42de-b1ba-de207dd10dd9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-38e898a3-ce11-42de-b1ba-de207dd10dd9 in namespace emptydir-wrapper-1184, will wait for the garbage collector to delete the pods
Jul 20 14:34:43.258: INFO: Deleting ReplicationController wrapped-volume-race-38e898a3-ce11-42de-b1ba-de207dd10dd9 took: 125.371401ms
Jul 20 14:34:44.359: INFO: Terminating ReplicationController wrapped-volume-race-38e898a3-ce11-42de-b1ba-de207dd10dd9 pods took: 1.100233602s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:35:06.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1184" for this suite.

• [SLOW TEST:156.315 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":139,"skipped":2525,"failed":0}
S
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:35:06.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7109, will wait for the garbage collector to delete the pods
Jul 20 14:35:14.631: INFO: Deleting Job.batch foo took: 45.348887ms
Jul 20 14:35:15.031: INFO: Terminating Job.batch foo pods took: 400.264459ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:35:52.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7109" for this suite.

• [SLOW TEST:46.603 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":140,"skipped":2526,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:35:52.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul 20 14:35:53.478: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 14:35:53.757: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 14:35:53.760: INFO: 
Logging pods the kubelet thinks are on node kali-worker before the test
Jul 20 14:35:53.778: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container status recorded)
Jul 20 14:35:53.778: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 14:35:53.778: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container status recorded)
Jul 20 14:35:53.778: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 14:35:53.778: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before the test
Jul 20 14:35:53.919: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container status recorded)
Jul 20 14:35:53.919: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 14:35:53.919: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container status recorded)
Jul 20 14:35:53.919: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c6a47861-6bde-445c-86c5-2246ae5a580a 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c6a47861-6bde-445c-86c5-2246ae5a580a off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c6a47861-6bde-445c-86c5-2246ae5a580a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:36:08.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8402" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:16.129 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":141,"skipped":2545,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:36:08.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-a969b3e4-4657-4b2c-8723-563470afdfb0
STEP: Creating a pod to test consume secrets
Jul 20 14:36:09.276: INFO: Waiting up to 5m0s for pod "pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1" in namespace "secrets-9244" to be "Succeeded or Failed"
Jul 20 14:36:09.309: INFO: Pod "pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.162338ms
Jul 20 14:36:12.118: INFO: Pod "pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841882472s
Jul 20 14:36:14.482: INFO: Pod "pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.206305824s
Jul 20 14:36:16.800: INFO: Pod "pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.523970851s
Jul 20 14:36:19.417: INFO: Pod "pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140602665s
STEP: Saw pod success
Jul 20 14:36:19.417: INFO: Pod "pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1" satisfied condition "Succeeded or Failed"
Jul 20 14:36:19.419: INFO: Trying to get logs from node kali-worker pod pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1 container secret-volume-test: 
STEP: delete the pod
Jul 20 14:36:20.220: INFO: Waiting for pod pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1 to disappear
Jul 20 14:36:20.267: INFO: Pod pod-secrets-54c47ec4-fb4e-4418-bd58-7645a8167aa1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:36:20.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9244" for this suite.

• [SLOW TEST:11.691 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2559,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:36:20.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:36:21.800: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul 20 14:36:27.147: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 20 14:36:29.156: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul 20 14:36:31.215: INFO: Creating deployment "test-rollover-deployment"
Jul 20 14:36:31.612: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul 20 14:36:33.626: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul 20 14:36:34.124: INFO: Ensure that both replica sets have 1 created replica
Jul 20 14:36:34.131: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul 20 14:36:34.139: INFO: Updating deployment test-rollover-deployment
Jul 20 14:36:34.139: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul 20 14:36:36.693: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul 20 14:36:37.201: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul 20 14:36:37.217: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 14:36:37.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852596, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:36:39.595: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 14:36:39.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852596, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:36:41.251: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 14:36:41.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852596, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:36:43.223: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 14:36:43.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852602, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:36:45.225: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 14:36:45.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852602, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:36:47.223: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 14:36:47.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852602, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:36:49.222: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 14:36:49.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852602, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:36:51.225: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 14:36:51.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852602, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852592, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:36:54.216: INFO: 
Jul 20 14:36:54.216: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 20 14:36:54.641: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-6297 /apis/apps/v1/namespaces/deployment-6297/deployments/test-rollover-deployment be035ef3-2657-4bdc-9405-3ee7a9b6c986 2740150 2 2020-07-20 14:36:31 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-20 14:36:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-20 14:36:53 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 
105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003009d58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-20 14:36:32 +0000 UTC,LastTransitionTime:2020-07-20 14:36:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-07-20 14:36:53 +0000 UTC,LastTransitionTime:2020-07-20 14:36:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul 20 14:36:54.819: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-6297 /apis/apps/v1/namespaces/deployment-6297/replicasets/test-rollover-deployment-84f7f6f64b 0408e8bd-dfd9-42b2-8e83-2fcab5fc0bfa 2740134 2 2020-07-20 14:36:34 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment be035ef3-2657-4bdc-9405-3ee7a9b6c986 0xc0036684c7 0xc0036684c8}] []  [{kube-controller-manager Update apps/v1 2020-07-20 14:36:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 101 48 51 53 101 102 51 45 50 54 53 55 45 52 98 100 99 45 57 52 48 53 45 51 101 101 55 97 57 98 54 99 57 56 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 
116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003668558  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 20 14:36:54.819: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul 20 14:36:54.820: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-6297 /apis/apps/v1/namespaces/deployment-6297/replicasets/test-rollover-controller 87c7f120-3601-4122-bc11-8f5bf8b1b2a8 2740146 2 2020-07-20 14:36:21 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment be035ef3-2657-4bdc-9405-3ee7a9b6c986 0xc00366828f 0xc0036682a0}] []  [{e2e.test Update apps/v1 2020-07-20 14:36:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-20 14:36:53 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 101 48 51 53 101 102 51 45 50 54 53 55 45 52 98 100 99 45 57 52 48 53 45 51 101 101 55 97 57 98 54 99 57 56 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 
100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003668358  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 20 14:36:54.820: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-6297 /apis/apps/v1/namespaces/deployment-6297/replicasets/test-rollover-deployment-5686c4cfd5 7797785f-5a22-45ba-98b5-cceb9ca9118d 2740080 2 2020-07-20 14:36:31 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment be035ef3-2657-4bdc-9405-3ee7a9b6c986 0xc0036683c7 0xc0036683c8}] []  [{kube-controller-manager Update apps/v1 2020-07-20 14:36:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 101 48 51 53 101 102 51 45 50 54 53 55 45 52 98 100 99 45 57 52 48 53 45 51 101 101 55 97 57 98 54 99 57 56 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 
101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003668458  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 20 14:36:54.823: INFO: Pod "test-rollover-deployment-84f7f6f64b-tx42m" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-tx42m test-rollover-deployment-84f7f6f64b- deployment-6297 /api/v1/namespaces/deployment-6297/pods/test-rollover-deployment-84f7f6f64b-tx42m 44ef2c82-40f5-4164-ab0f-f52278017b77 2740102 0 2020-07-20 14:36:35 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 0408e8bd-dfd9-42b2-8e83-2fcab5fc0bfa 0xc003668b47 0xc003668b48}] []  [{kube-controller-manager Update v1 2020-07-20 14:36:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 48 56 101 56 98 100 45 100 102 100 57 45 52 50 98 50 45 56 101 56 51 45 50 102 99 97 98 53 102 99 48 98 102 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 14:36:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 48 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2fw9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2fw9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2fw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Ex
ists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 14:36:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 14:36:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 14:36:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 14:36:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.105,StartTime:2020-07-20 14:36:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 14:36:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://de24a28ce83c48cd20592db1833c0f714f1a1d76a873ac663f144f8ae04126a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:36:54.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6297" for this suite.

• [SLOW TEST:34.290 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":143,"skipped":2626,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:36:54.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3255
STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-3255
STEP: creating replication controller externalsvc in namespace services-3255
I0720 14:36:55.996903       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3255, replica count: 2
I0720 14:36:59.047388       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:37:02.047619       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:37:05.047885       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:37:08.048106       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jul 20 14:37:09.072: INFO: Creating new exec pod
Jul 20 14:37:17.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3255 execpodzjztt -- /bin/sh -x -c nslookup nodeport-service'
Jul 20 14:37:21.390: INFO: stderr: "I0720 14:37:21.309007    1742 log.go:172] (0xc0006880b0) (0xc0008f00a0) Create stream\nI0720 14:37:21.309077    1742 log.go:172] (0xc0006880b0) (0xc0008f00a0) Stream added, broadcasting: 1\nI0720 14:37:21.311547    1742 log.go:172] (0xc0006880b0) Reply frame received for 1\nI0720 14:37:21.311585    1742 log.go:172] (0xc0006880b0) (0xc0008f0140) Create stream\nI0720 14:37:21.311607    1742 log.go:172] (0xc0006880b0) (0xc0008f0140) Stream added, broadcasting: 3\nI0720 14:37:21.312391    1742 log.go:172] (0xc0006880b0) Reply frame received for 3\nI0720 14:37:21.312426    1742 log.go:172] (0xc0006880b0) (0xc000494be0) Create stream\nI0720 14:37:21.312435    1742 log.go:172] (0xc0006880b0) (0xc000494be0) Stream added, broadcasting: 5\nI0720 14:37:21.313324    1742 log.go:172] (0xc0006880b0) Reply frame received for 5\nI0720 14:37:21.373159    1742 log.go:172] (0xc0006880b0) Data frame received for 5\nI0720 14:37:21.373191    1742 log.go:172] (0xc000494be0) (5) Data frame handling\nI0720 14:37:21.373213    1742 log.go:172] (0xc000494be0) (5) Data frame sent\n+ nslookup nodeport-service\nI0720 14:37:21.380877    1742 log.go:172] (0xc0006880b0) Data frame received for 3\nI0720 14:37:21.380921    1742 log.go:172] (0xc0008f0140) (3) Data frame handling\nI0720 14:37:21.380955    1742 log.go:172] (0xc0008f0140) (3) Data frame sent\nI0720 14:37:21.381935    1742 log.go:172] (0xc0006880b0) Data frame received for 3\nI0720 14:37:21.381979    1742 log.go:172] (0xc0008f0140) (3) Data frame handling\nI0720 14:37:21.382017    1742 log.go:172] (0xc0008f0140) (3) Data frame sent\nI0720 14:37:21.382395    1742 log.go:172] (0xc0006880b0) Data frame received for 3\nI0720 14:37:21.382419    1742 log.go:172] (0xc0008f0140) (3) Data frame handling\nI0720 14:37:21.382437    1742 log.go:172] (0xc0006880b0) Data frame received for 5\nI0720 14:37:21.382444    1742 log.go:172] (0xc000494be0) (5) Data frame handling\nI0720 14:37:21.384370    1742 log.go:172] (0xc0006880b0) Data frame received for 1\nI0720 14:37:21.384460    1742 log.go:172] (0xc0008f00a0) (1) Data frame handling\nI0720 14:37:21.384540    1742 log.go:172] (0xc0008f00a0) (1) Data frame sent\nI0720 14:37:21.384575    1742 log.go:172] (0xc0006880b0) (0xc0008f00a0) Stream removed, broadcasting: 1\nI0720 14:37:21.384602    1742 log.go:172] (0xc0006880b0) Go away received\nI0720 14:37:21.384954    1742 log.go:172] (0xc0006880b0) (0xc0008f00a0) Stream removed, broadcasting: 1\nI0720 14:37:21.384970    1742 log.go:172] (0xc0006880b0) (0xc0008f0140) Stream removed, broadcasting: 3\nI0720 14:37:21.384978    1742 log.go:172] (0xc0006880b0) (0xc000494be0) Stream removed, broadcasting: 5\n"
Jul 20 14:37:21.391: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3255.svc.cluster.local\tcanonical name = externalsvc.services-3255.svc.cluster.local.\nName:\texternalsvc.services-3255.svc.cluster.local\nAddress: 10.96.94.83\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3255, will wait for the garbage collector to delete the pods
Jul 20 14:37:21.605: INFO: Deleting ReplicationController externalsvc took: 5.698915ms
Jul 20 14:37:22.005: INFO: Terminating ReplicationController externalsvc pods took: 400.226514ms
Jul 20 14:37:43.586: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:37:44.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3255" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:49.435 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":144,"skipped":2627,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:37:44.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jul 20 14:37:44.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:38:00.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-873" for this suite.

• [SLOW TEST:17.254 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":145,"skipped":2642,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:38:01.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-1d4244f1-b471-4bb8-a287-b3c0044b87ea
STEP: Creating a pod to test consume configMaps
Jul 20 14:38:02.891: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c" in namespace "configmap-5653" to be "Succeeded or Failed"
Jul 20 14:38:03.268: INFO: Pod "pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c": Phase="Pending", Reason="", readiness=false. Elapsed: 377.589791ms
Jul 20 14:38:05.272: INFO: Pod "pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381606587s
Jul 20 14:38:07.275: INFO: Pod "pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384141461s
Jul 20 14:38:09.428: INFO: Pod "pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537505851s
Jul 20 14:38:11.705: INFO: Pod "pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.814216066s
STEP: Saw pod success
Jul 20 14:38:11.705: INFO: Pod "pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c" satisfied condition "Succeeded or Failed"
Jul 20 14:38:11.707: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c container configmap-volume-test: 
STEP: delete the pod
Jul 20 14:38:12.656: INFO: Waiting for pod pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c to disappear
Jul 20 14:38:12.679: INFO: Pod pod-configmaps-e5823bdf-5bbc-4f80-b3a7-fdf37585fd5c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:38:12.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5653" for this suite.

• [SLOW TEST:11.164 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2656,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:38:12.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 20 14:38:22.087: INFO: 10 pods remaining
Jul 20 14:38:22.087: INFO: 10 pods have nil DeletionTimestamp
Jul 20 14:38:22.087: INFO: 
Jul 20 14:38:24.245: INFO: 0 pods remaining
Jul 20 14:38:24.245: INFO: 0 pods have nil DeletionTimestamp
Jul 20 14:38:24.245: INFO: 
Jul 20 14:38:25.496: INFO: 0 pods remaining
Jul 20 14:38:25.496: INFO: 0 pods have nil DeletionTimestamp
Jul 20 14:38:25.496: INFO: 
STEP: Gathering metrics
W0720 14:38:27.358774       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 14:38:27.358: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:38:27.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5069" for this suite.

• [SLOW TEST:14.679 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":147,"skipped":2670,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:38:27.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 20 14:38:30.064: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:30.329: INFO: Number of nodes with available pods: 0
Jul 20 14:38:30.329: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:38:31.510: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:31.566: INFO: Number of nodes with available pods: 0
Jul 20 14:38:31.566: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:38:32.479: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:32.484: INFO: Number of nodes with available pods: 0
Jul 20 14:38:32.484: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:38:33.518: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:33.731: INFO: Number of nodes with available pods: 0
Jul 20 14:38:33.731: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:38:34.381: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:34.509: INFO: Number of nodes with available pods: 0
Jul 20 14:38:34.509: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:38:35.339: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:35.618: INFO: Number of nodes with available pods: 0
Jul 20 14:38:35.618: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:38:36.365: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:36.392: INFO: Number of nodes with available pods: 2
Jul 20 14:38:36.392: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul 20 14:38:37.043: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:37.202: INFO: Number of nodes with available pods: 1
Jul 20 14:38:37.202: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:38.207: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:38.211: INFO: Number of nodes with available pods: 1
Jul 20 14:38:38.211: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:39.232: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:39.235: INFO: Number of nodes with available pods: 1
Jul 20 14:38:39.235: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:40.207: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:40.210: INFO: Number of nodes with available pods: 1
Jul 20 14:38:40.210: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:41.209: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:41.407: INFO: Number of nodes with available pods: 1
Jul 20 14:38:41.407: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:42.221: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:42.439: INFO: Number of nodes with available pods: 1
Jul 20 14:38:42.439: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:43.264: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:43.358: INFO: Number of nodes with available pods: 1
Jul 20 14:38:43.358: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:44.287: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:44.290: INFO: Number of nodes with available pods: 1
Jul 20 14:38:44.290: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:45.209: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:45.213: INFO: Number of nodes with available pods: 1
Jul 20 14:38:45.213: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:46.207: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:46.210: INFO: Number of nodes with available pods: 1
Jul 20 14:38:46.211: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:47.206: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:47.210: INFO: Number of nodes with available pods: 1
Jul 20 14:38:47.210: INFO: Node kali-worker2 is running more than one daemon pod
Jul 20 14:38:48.207: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:38:48.211: INFO: Number of nodes with available pods: 2
Jul 20 14:38:48.211: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6874, will wait for the garbage collector to delete the pods
Jul 20 14:38:48.272: INFO: Deleting DaemonSet.extensions daemon-set took: 6.252499ms
Jul 20 14:38:48.573: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.35373ms
Jul 20 14:39:03.902: INFO: Number of nodes with available pods: 0
Jul 20 14:39:03.902: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 14:39:03.905: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6874/daemonsets","resourceVersion":"2740845"},"items":null}

Jul 20 14:39:04.064: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6874/pods","resourceVersion":"2740846"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:39:04.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6874" for this suite.

• [SLOW TEST:36.714 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":148,"skipped":2684,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:39:04.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 14:39:06.830: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 14:39:09.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852747, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:39:11.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852747, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:39:14.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852747, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730852746, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 14:39:17.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:39:17.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4962" for this suite.
STEP: Destroying namespace "webhook-4962-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.333 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":149,"skipped":2686,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:39:18.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:39:19.215: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707" in namespace "projected-7878" to be "Succeeded or Failed"
Jul 20 14:39:19.370: INFO: Pod "downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707": Phase="Pending", Reason="", readiness=false. Elapsed: 155.215754ms
Jul 20 14:39:21.374: INFO: Pod "downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159223413s
Jul 20 14:39:23.447: INFO: Pod "downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23176235s
Jul 20 14:39:25.450: INFO: Pod "downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.234740361s
STEP: Saw pod success
Jul 20 14:39:25.450: INFO: Pod "downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707" satisfied condition "Succeeded or Failed"
Jul 20 14:39:25.453: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707 container client-container: 
STEP: delete the pod
Jul 20 14:39:26.101: INFO: Waiting for pod downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707 to disappear
Jul 20 14:39:26.364: INFO: Pod downwardapi-volume-8f511b84-8b37-44c9-b006-c66a2c4a7707 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:39:26.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7878" for this suite.

• [SLOW TEST:8.180 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2686,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:39:26.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Jul 20 14:39:26.821: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix700077677/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:39:26.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6030" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":151,"skipped":2694,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:39:26.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:40:27.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4580" for this suite.

• [SLOW TEST:60.202 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2703,"failed":0}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:40:27.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul 20 14:40:27.571: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:40:44.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1320" for this suite.

• [SLOW TEST:17.988 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":153,"skipped":2709,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:40:45.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:40:46.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config version'
Jul 20 14:40:47.051: INFO: stderr: ""
Jul 20 14:40:47.051: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.5\", GitCommit:\"e6503f8d8f769ace2f338794c914a96fc335df0f\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:53:46Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"c96aede7b5205121079932896c4ad89bb93260af\", GitTreeState:\"clean\", BuildDate:\"2020-06-20T01:49:49Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:40:47.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1924" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":154,"skipped":2728,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:40:47.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:40:48.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 20 14:40:51.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6241 create -f -'
Jul 20 14:41:20.542: INFO: stderr: ""
Jul 20 14:41:20.542: INFO: stdout: "e2e-test-crd-publish-openapi-936-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 20 14:41:20.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6241 delete e2e-test-crd-publish-openapi-936-crds test-cr'
Jul 20 14:41:20.742: INFO: stderr: ""
Jul 20 14:41:20.742: INFO: stdout: "e2e-test-crd-publish-openapi-936-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul 20 14:41:20.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6241 apply -f -'
Jul 20 14:41:21.537: INFO: stderr: ""
Jul 20 14:41:21.537: INFO: stdout: "e2e-test-crd-publish-openapi-936-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 20 14:41:21.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6241 delete e2e-test-crd-publish-openapi-936-crds test-cr'
Jul 20 14:41:21.685: INFO: stderr: ""
Jul 20 14:41:21.685: INFO: stdout: "e2e-test-crd-publish-openapi-936-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 20 14:41:21.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-936-crds'
Jul 20 14:41:22.397: INFO: stderr: ""
Jul 20 14:41:22.397: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-936-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:41:24.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6241" for this suite.

• [SLOW TEST:37.190 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":155,"skipped":2746,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:41:24.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Jul 20 14:41:24.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config cluster-info'
Jul 20 14:41:24.595: INFO: stderr: ""
Jul 20 14:41:24.595: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:41:24.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7110" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":156,"skipped":2747,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:41:24.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-f1fea19e-2ac2-49fc-99c8-6844bcb8a3a8 in namespace container-probe-1949
Jul 20 14:41:30.959: INFO: Started pod liveness-f1fea19e-2ac2-49fc-99c8-6844bcb8a3a8 in namespace container-probe-1949
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 14:41:30.962: INFO: Initial restart count of pod liveness-f1fea19e-2ac2-49fc-99c8-6844bcb8a3a8 is 0
Jul 20 14:41:51.974: INFO: Restart count of pod container-probe-1949/liveness-f1fea19e-2ac2-49fc-99c8-6844bcb8a3a8 is now 1 (21.011914533s elapsed)
Jul 20 14:42:14.324: INFO: Restart count of pod container-probe-1949/liveness-f1fea19e-2ac2-49fc-99c8-6844bcb8a3a8 is now 2 (43.361811619s elapsed)
Jul 20 14:42:32.740: INFO: Restart count of pod container-probe-1949/liveness-f1fea19e-2ac2-49fc-99c8-6844bcb8a3a8 is now 3 (1m1.777972505s elapsed)
Jul 20 14:42:51.304: INFO: Restart count of pod container-probe-1949/liveness-f1fea19e-2ac2-49fc-99c8-6844bcb8a3a8 is now 4 (1m20.341622251s elapsed)
Jul 20 14:44:01.919: INFO: Restart count of pod container-probe-1949/liveness-f1fea19e-2ac2-49fc-99c8-6844bcb8a3a8 is now 5 (2m30.956491596s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:44:01.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1949" for this suite.

• [SLOW TEST:157.374 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2759,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:44:01.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:44:05.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2899" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2767,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:44:05.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5231.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5231.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 14:44:22.987: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:22.991: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:22.994: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:22.997: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:23.384: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: Get https://172.30.12.66:35995/api/v1/namespaces/dns-5231/pods/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e/proxy/results/wheezy_udp@PodARecord: stream error: stream ID 263; INTERNAL_ERROR
Jul 20 14:44:23.416: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:23.419: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:23.422: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:23.424: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:23.429: INFO: Lookups using dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@PodARecord jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local]

Jul 20 14:44:28.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:28.811: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:29.003: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:29.007: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:29.018: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:29.021: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:29.025: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:29.028: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:29.046: INFO: Lookups using dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local]

Jul 20 14:44:33.456: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:33.460: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:33.463: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:33.466: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:33.529: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:33.534: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:33.536: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:33.539: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:33.546: INFO: Lookups using dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local]

Jul 20 14:44:38.435: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:38.439: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:38.442: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:38.446: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:38.456: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:38.459: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:38.462: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:38.465: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:38.471: INFO: Lookups using dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local]

Jul 20 14:44:44.520: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:45.201: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:45.487: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:45.568: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:45.577: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:45.580: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:45.583: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:45.586: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e: the server could not find the requested resource (get pods dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e)
Jul 20 14:44:45.748: INFO: Lookups using dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local]

Jul 20 14:44:48.470: INFO: DNS probes using dns-5231/dns-test-9ed35fa1-d7aa-4b34-8d63-f0dfc9655f0e succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:44:49.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5231" for this suite.

• [SLOW TEST:43.992 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":159,"skipped":2771,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:44:49.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 14:44:49.908: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 14:44:52.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853090, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:44:54.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853090, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:44:56.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853090, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853089, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 14:44:59.627: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jul 20 14:45:06.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config attach --namespace=webhook-349 to-be-attached-pod -i -c=container1'
Jul 20 14:45:06.585: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:45:06.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-349" for this suite.
STEP: Destroying namespace "webhook-349-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.872 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":160,"skipped":2771,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:45:09.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jul 20 14:45:10.378: INFO: Created pod &Pod{ObjectMeta:{dns-8778  dns-8778 /api/v1/namespaces/dns-8778/pods/dns-8778 d419394e-fbe5-4338-b86b-364beb3549b4 2742211 0 2020-07-20 14:45:10 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-07-20 14:45:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tpktt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tpktt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tpktt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 14:45:10.771: INFO: The status of Pod dns-8778 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:45:13.033: INFO: The status of Pod dns-8778 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:45:14.777: INFO: The status of Pod dns-8778 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:45:16.776: INFO: The status of Pod dns-8778 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jul 20 14:45:16.776: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8778 PodName:dns-8778 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 14:45:16.776: INFO: >>> kubeConfig: /root/.kube/config
I0720 14:45:16.808951       7 log.go:172] (0xc002bc9a20) (0xc001349860) Create stream
I0720 14:45:16.808983       7 log.go:172] (0xc002bc9a20) (0xc001349860) Stream added, broadcasting: 1
I0720 14:45:16.811244       7 log.go:172] (0xc002bc9a20) Reply frame received for 1
I0720 14:45:16.811279       7 log.go:172] (0xc002bc9a20) (0xc000d223c0) Create stream
I0720 14:45:16.811295       7 log.go:172] (0xc002bc9a20) (0xc000d223c0) Stream added, broadcasting: 3
I0720 14:45:16.812376       7 log.go:172] (0xc002bc9a20) Reply frame received for 3
I0720 14:45:16.812416       7 log.go:172] (0xc002bc9a20) (0xc001541e00) Create stream
I0720 14:45:16.812433       7 log.go:172] (0xc002bc9a20) (0xc001541e00) Stream added, broadcasting: 5
I0720 14:45:16.813780       7 log.go:172] (0xc002bc9a20) Reply frame received for 5
I0720 14:45:16.900251       7 log.go:172] (0xc002bc9a20) Data frame received for 3
I0720 14:45:16.900287       7 log.go:172] (0xc000d223c0) (3) Data frame handling
I0720 14:45:16.900312       7 log.go:172] (0xc000d223c0) (3) Data frame sent
I0720 14:45:16.902113       7 log.go:172] (0xc002bc9a20) Data frame received for 5
I0720 14:45:16.902132       7 log.go:172] (0xc001541e00) (5) Data frame handling
I0720 14:45:16.902252       7 log.go:172] (0xc002bc9a20) Data frame received for 3
I0720 14:45:16.902266       7 log.go:172] (0xc000d223c0) (3) Data frame handling
I0720 14:45:16.904226       7 log.go:172] (0xc002bc9a20) Data frame received for 1
I0720 14:45:16.904244       7 log.go:172] (0xc001349860) (1) Data frame handling
I0720 14:45:16.904353       7 log.go:172] (0xc001349860) (1) Data frame sent
I0720 14:45:16.904387       7 log.go:172] (0xc002bc9a20) (0xc001349860) Stream removed, broadcasting: 1
I0720 14:45:16.904471       7 log.go:172] (0xc002bc9a20) (0xc001349860) Stream removed, broadcasting: 1
I0720 14:45:16.904505       7 log.go:172] (0xc002bc9a20) Go away received
I0720 14:45:16.904579       7 log.go:172] (0xc002bc9a20) (0xc000d223c0) Stream removed, broadcasting: 3
I0720 14:45:16.904632       7 log.go:172] (0xc002bc9a20) (0xc001541e00) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jul 20 14:45:16.904: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8778 PodName:dns-8778 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 14:45:16.904: INFO: >>> kubeConfig: /root/.kube/config
I0720 14:45:17.020431       7 log.go:172] (0xc002d84420) (0xc0020165a0) Create stream
I0720 14:45:17.020470       7 log.go:172] (0xc002d84420) (0xc0020165a0) Stream added, broadcasting: 1
I0720 14:45:17.023148       7 log.go:172] (0xc002d84420) Reply frame received for 1
I0720 14:45:17.023199       7 log.go:172] (0xc002d84420) (0xc001349ae0) Create stream
I0720 14:45:17.023212       7 log.go:172] (0xc002d84420) (0xc001349ae0) Stream added, broadcasting: 3
I0720 14:45:17.024505       7 log.go:172] (0xc002d84420) Reply frame received for 3
I0720 14:45:17.024560       7 log.go:172] (0xc002d84420) (0xc001349b80) Create stream
I0720 14:45:17.024578       7 log.go:172] (0xc002d84420) (0xc001349b80) Stream added, broadcasting: 5
I0720 14:45:17.025724       7 log.go:172] (0xc002d84420) Reply frame received for 5
I0720 14:45:17.096821       7 log.go:172] (0xc002d84420) Data frame received for 3
I0720 14:45:17.096849       7 log.go:172] (0xc001349ae0) (3) Data frame handling
I0720 14:45:17.096866       7 log.go:172] (0xc001349ae0) (3) Data frame sent
I0720 14:45:17.097935       7 log.go:172] (0xc002d84420) Data frame received for 3
I0720 14:45:17.097958       7 log.go:172] (0xc001349ae0) (3) Data frame handling
I0720 14:45:17.097978       7 log.go:172] (0xc002d84420) Data frame received for 5
I0720 14:45:17.098019       7 log.go:172] (0xc001349b80) (5) Data frame handling
I0720 14:45:17.099241       7 log.go:172] (0xc002d84420) Data frame received for 1
I0720 14:45:17.099254       7 log.go:172] (0xc0020165a0) (1) Data frame handling
I0720 14:45:17.099263       7 log.go:172] (0xc0020165a0) (1) Data frame sent
I0720 14:45:17.099276       7 log.go:172] (0xc002d84420) (0xc0020165a0) Stream removed, broadcasting: 1
I0720 14:45:17.099329       7 log.go:172] (0xc002d84420) Go away received
I0720 14:45:17.099366       7 log.go:172] (0xc002d84420) (0xc0020165a0) Stream removed, broadcasting: 1
I0720 14:45:17.099388       7 log.go:172] (0xc002d84420) (0xc001349ae0) Stream removed, broadcasting: 3
I0720 14:45:17.099398       7 log.go:172] (0xc002d84420) (0xc001349b80) Stream removed, broadcasting: 5
Jul 20 14:45:17.099: INFO: Deleting pod dns-8778...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:45:17.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8778" for this suite.

• [SLOW TEST:8.044 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":161,"skipped":2808,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:45:17.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:45:18.085: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:45:20.469: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:45:22.088: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:45:24.090: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:26.090: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:28.092: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:30.089: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:32.248: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:34.123: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:36.153: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:38.177: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:40.152: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = false)
Jul 20 14:45:42.090: INFO: The status of Pod test-webserver-8695ed63-03b3-484a-bad2-b02d93e8b50e is Running (Ready = true)
Jul 20 14:45:42.093: INFO: Container started at 2020-07-20 14:45:22 +0000 UTC, pod became ready at 2020-07-20 14:45:41 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:45:42.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5945" for this suite.

• [SLOW TEST:24.850 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2850,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:45:42.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 14:45:43.362: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 14:45:45.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:45:47.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:45:50.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:45:51.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853143, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 14:45:55.074: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:46:06.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-716" for this suite.
STEP: Destroying namespace "webhook-716-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.905 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":163,"skipped":2875,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:46:07.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 20 14:46:07.940: INFO: Waiting up to 5m0s for pod "pod-73ccc42b-4449-4807-80a0-3f129deccb7c" in namespace "emptydir-8272" to be "Succeeded or Failed"
Jul 20 14:46:07.944: INFO: Pod "pod-73ccc42b-4449-4807-80a0-3f129deccb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490675ms
Jul 20 14:46:10.515: INFO: Pod "pod-73ccc42b-4449-4807-80a0-3f129deccb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.575118248s
Jul 20 14:46:12.621: INFO: Pod "pod-73ccc42b-4449-4807-80a0-3f129deccb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.681230048s
Jul 20 14:46:14.674: INFO: Pod "pod-73ccc42b-4449-4807-80a0-3f129deccb7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.73475806s
STEP: Saw pod success
Jul 20 14:46:14.675: INFO: Pod "pod-73ccc42b-4449-4807-80a0-3f129deccb7c" satisfied condition "Succeeded or Failed"
Jul 20 14:46:14.677: INFO: Trying to get logs from node kali-worker pod pod-73ccc42b-4449-4807-80a0-3f129deccb7c container test-container: 
STEP: delete the pod
Jul 20 14:46:14.730: INFO: Waiting for pod pod-73ccc42b-4449-4807-80a0-3f129deccb7c to disappear
Jul 20 14:46:14.819: INFO: Pod pod-73ccc42b-4449-4807-80a0-3f129deccb7c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:46:14.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8272" for this suite.

• [SLOW TEST:7.880 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2875,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:46:14.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:46:16.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531" in namespace "downward-api-8011" to be "Succeeded or Failed"
Jul 20 14:46:16.131: INFO: Pod "downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531": Phase="Pending", Reason="", readiness=false. Elapsed: 41.935293ms
Jul 20 14:46:18.135: INFO: Pod "downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045507718s
Jul 20 14:46:20.764: INFO: Pod "downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531": Phase="Pending", Reason="", readiness=false. Elapsed: 4.674751742s
Jul 20 14:46:22.842: INFO: Pod "downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531": Phase="Pending", Reason="", readiness=false. Elapsed: 6.752439694s
Jul 20 14:46:24.845: INFO: Pod "downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531": Phase="Running", Reason="", readiness=true. Elapsed: 8.756116448s
Jul 20 14:46:26.902: INFO: Pod "downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.812486146s
STEP: Saw pod success
Jul 20 14:46:26.902: INFO: Pod "downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531" satisfied condition "Succeeded or Failed"
Jul 20 14:46:26.905: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531 container client-container: 
STEP: delete the pod
Jul 20 14:46:26.988: INFO: Waiting for pod downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531 to disappear
Jul 20 14:46:27.032: INFO: Pod downwardapi-volume-06bcc4d9-1acc-44e0-b151-cb81efe97531 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:46:27.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8011" for this suite.

• [SLOW TEST:12.152 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2900,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:46:27.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 20 14:46:27.240: INFO: Waiting up to 5m0s for pod "downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a" in namespace "downward-api-6844" to be "Succeeded or Failed"
Jul 20 14:46:27.506: INFO: Pod "downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a": Phase="Pending", Reason="", readiness=false. Elapsed: 266.053926ms
Jul 20 14:46:29.510: INFO: Pod "downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26981087s
Jul 20 14:46:32.105: INFO: Pod "downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.8650032s
Jul 20 14:46:34.110: INFO: Pod "downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.870289241s
STEP: Saw pod success
Jul 20 14:46:34.110: INFO: Pod "downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a" satisfied condition "Succeeded or Failed"
Jul 20 14:46:34.114: INFO: Trying to get logs from node kali-worker2 pod downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a container dapi-container: 
STEP: delete the pod
Jul 20 14:46:34.341: INFO: Waiting for pod downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a to disappear
Jul 20 14:46:34.430: INFO: Pod downward-api-1fc91527-95e0-428d-8936-4d8b7bebb93a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:46:34.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6844" for this suite.

• [SLOW TEST:7.592 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2919,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:46:34.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7196
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-7196
Jul 20 14:46:35.530: INFO: Found 0 stateful pods, waiting for 1
Jul 20 14:46:45.534: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 20 14:46:45.569: INFO: Deleting all statefulset in ns statefulset-7196
Jul 20 14:46:45.644: INFO: Scaling statefulset ss to 0
Jul 20 14:47:05.700: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 14:47:05.703: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:47:05.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7196" for this suite.

• [SLOW TEST:31.122 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":167,"skipped":2920,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:47:05.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-390dcc26-0554-4a9e-a749-3e4fefe907a0 in namespace container-probe-5441
Jul 20 14:47:12.375: INFO: Started pod liveness-390dcc26-0554-4a9e-a749-3e4fefe907a0 in namespace container-probe-5441
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 14:47:12.397: INFO: Initial restart count of pod liveness-390dcc26-0554-4a9e-a749-3e4fefe907a0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:51:13.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5441" for this suite.

• [SLOW TEST:247.874 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2951,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:51:13.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8545
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jul 20 14:51:15.394: INFO: Found 0 stateful pods, waiting for 3
Jul 20 14:51:25.398: INFO: Found 2 stateful pods, waiting for 3
Jul 20 14:51:35.399: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:51:35.399: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:51:35.399: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 20 14:51:35.426: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 20 14:51:45.566: INFO: Updating stateful set ss2
Jul 20 14:51:45.710: INFO: Waiting for Pod statefulset-8545/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jul 20 14:51:57.690: INFO: Found 2 stateful pods, waiting for 3
Jul 20 14:52:07.694: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:52:07.694: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:52:07.694: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 20 14:52:17.935: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:52:17.935: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 14:52:17.935: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 20 14:52:18.026: INFO: Updating stateful set ss2
Jul 20 14:52:19.039: INFO: Waiting for Pod statefulset-8545/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 20 14:52:29.046: INFO: Waiting for Pod statefulset-8545/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 20 14:52:39.060: INFO: Updating stateful set ss2
Jul 20 14:52:39.187: INFO: Waiting for StatefulSet statefulset-8545/ss2 to complete update
Jul 20 14:52:39.187: INFO: Waiting for Pod statefulset-8545/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 20 14:52:49.420: INFO: Deleting all statefulset in ns statefulset-8545
Jul 20 14:52:49.422: INFO: Scaling statefulset ss2 to 0
Jul 20 14:53:29.808: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 14:53:29.811: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:53:29.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8545" for this suite.

• [SLOW TEST:136.222 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":169,"skipped":2955,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:53:29.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Jul 20 14:53:29.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7196'
Jul 20 14:53:34.862: INFO: stderr: ""
Jul 20 14:53:34.862: INFO: stdout: "pod/pause created\n"
Jul 20 14:53:34.862: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 20 14:53:34.862: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7196" to be "running and ready"
Jul 20 14:53:34.953: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 90.841517ms
Jul 20 14:53:36.970: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10839325s
Jul 20 14:53:38.975: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.113044815s
Jul 20 14:53:38.975: INFO: Pod "pause" satisfied condition "running and ready"
Jul 20 14:53:38.975: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 20 14:53:38.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7196'
Jul 20 14:53:39.084: INFO: stderr: ""
Jul 20 14:53:39.084: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 20 14:53:39.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7196'
Jul 20 14:53:39.177: INFO: stderr: ""
Jul 20 14:53:39.177: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 20 14:53:39.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7196'
Jul 20 14:53:39.285: INFO: stderr: ""
Jul 20 14:53:39.285: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 20 14:53:39.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7196'
Jul 20 14:53:39.391: INFO: stderr: ""
Jul 20 14:53:39.391: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Jul 20 14:53:39.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7196'
Jul 20 14:53:39.631: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 14:53:39.631: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 20 14:53:39.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7196'
Jul 20 14:53:39.745: INFO: stderr: "No resources found in kubectl-7196 namespace.\n"
Jul 20 14:53:39.745: INFO: stdout: ""
Jul 20 14:53:39.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7196 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 14:53:39.897: INFO: stderr: ""
Jul 20 14:53:39.897: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:53:39.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7196" for this suite.

• [SLOW TEST:10.238 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":170,"skipped":2975,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:53:40.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:53:40.335: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0" in namespace "downward-api-7083" to be "Succeeded or Failed"
Jul 20 14:53:40.338: INFO: Pod "downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.526635ms
Jul 20 14:53:42.370: INFO: Pod "downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034728809s
Jul 20 14:53:44.374: INFO: Pod "downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.038805875s
Jul 20 14:53:46.559: INFO: Pod "downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223689459s
STEP: Saw pod success
Jul 20 14:53:46.559: INFO: Pod "downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0" satisfied condition "Succeeded or Failed"
Jul 20 14:53:46.561: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0 container client-container: 
STEP: delete the pod
Jul 20 14:53:46.900: INFO: Waiting for pod downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0 to disappear
Jul 20 14:53:46.962: INFO: Pod downwardapi-volume-6900fb84-606e-4296-a224-2a555755c6d0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:53:46.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7083" for this suite.

• [SLOW TEST:6.880 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":3001,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:53:46.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 14:53:47.994: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 14:53:50.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853628, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853628, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853628, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853627, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:53:52.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853628, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853628, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853628, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853627, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 14:53:55.174: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:53:55.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1459-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:53:56.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1888" for this suite.
STEP: Destroying namespace "webhook-1888-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.532 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":172,"skipped":3012,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:53:56.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:54:00.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9323" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":3013,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:54:00.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-9sfx
STEP: Creating a pod to test atomic-volume-subpath
Jul 20 14:54:00.824: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9sfx" in namespace "subpath-5151" to be "Succeeded or Failed"
Jul 20 14:54:00.837: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.960686ms
Jul 20 14:54:02.982: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158521977s
Jul 20 14:54:04.986: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 4.161906306s
Jul 20 14:54:06.990: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 6.166335558s
Jul 20 14:54:08.995: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 8.171196604s
Jul 20 14:54:10.999: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 10.174830608s
Jul 20 14:54:13.006: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 12.182325331s
Jul 20 14:54:15.011: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 14.1872455s
Jul 20 14:54:17.015: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 16.191152694s
Jul 20 14:54:19.020: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 18.195745666s
Jul 20 14:54:21.024: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 20.200582412s
Jul 20 14:54:23.028: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Running", Reason="", readiness=true. Elapsed: 22.2045703s
Jul 20 14:54:25.085: INFO: Pod "pod-subpath-test-projected-9sfx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.261123874s
STEP: Saw pod success
Jul 20 14:54:25.085: INFO: Pod "pod-subpath-test-projected-9sfx" satisfied condition "Succeeded or Failed"
Jul 20 14:54:25.088: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-9sfx container test-container-subpath-projected-9sfx: 
STEP: delete the pod
Jul 20 14:54:25.266: INFO: Waiting for pod pod-subpath-test-projected-9sfx to disappear
Jul 20 14:54:25.275: INFO: Pod pod-subpath-test-projected-9sfx no longer exists
STEP: Deleting pod pod-subpath-test-projected-9sfx
Jul 20 14:54:25.275: INFO: Deleting pod "pod-subpath-test-projected-9sfx" in namespace "subpath-5151"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:54:25.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5151" for this suite.

• [SLOW TEST:24.692 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":174,"skipped":3021,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:54:25.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:54:36.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4193" for this suite.

• [SLOW TEST:11.509 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":175,"skipped":3024,"failed":0}
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:54:36.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:54:40.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-162" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":3024,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:54:40.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:54:45.457: INFO: Waiting up to 5m0s for pod "client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279" in namespace "pods-3172" to be "Succeeded or Failed"
Jul 20 14:54:45.510: INFO: Pod "client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279": Phase="Pending", Reason="", readiness=false. Elapsed: 53.090484ms
Jul 20 14:54:47.514: INFO: Pod "client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057034398s
Jul 20 14:54:49.518: INFO: Pod "client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279": Phase="Running", Reason="", readiness=true. Elapsed: 4.061637422s
Jul 20 14:54:51.522: INFO: Pod "client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065648556s
STEP: Saw pod success
Jul 20 14:54:51.522: INFO: Pod "client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279" satisfied condition "Succeeded or Failed"
Jul 20 14:54:51.525: INFO: Trying to get logs from node kali-worker pod client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279 container env3cont: 
STEP: delete the pod
Jul 20 14:54:51.649: INFO: Waiting for pod client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279 to disappear
Jul 20 14:54:51.653: INFO: Pod client-envvars-0691d1b6-9bb2-4a2a-94e8-ba541d59e279 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:54:51.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3172" for this suite.

• [SLOW TEST:10.699 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3040,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:54:51.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jul 20 14:54:51.832: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Jul 20 14:54:52.772: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jul 20 14:54:55.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853692, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853692, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853692, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853692, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:54:57.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853692, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853692, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853692, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853692, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:55:00.119: INFO: Waited 622.251293ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:55:01.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3582" for this suite.

• [SLOW TEST:10.634 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":178,"skipped":3058,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:55:02.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:55:03.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c" in namespace "downward-api-8305" to be "Succeeded or Failed"
Jul 20 14:55:03.438: INFO: Pod "downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 65.394778ms
Jul 20 14:55:05.442: INFO: Pod "downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069515271s
Jul 20 14:55:07.726: INFO: Pod "downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353154019s
Jul 20 14:55:09.731: INFO: Pod "downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.357737341s
STEP: Saw pod success
Jul 20 14:55:09.731: INFO: Pod "downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c" satisfied condition "Succeeded or Failed"
Jul 20 14:55:09.734: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c container client-container: 
STEP: delete the pod
Jul 20 14:55:09.769: INFO: Waiting for pod downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c to disappear
Jul 20 14:55:09.781: INFO: Pod downwardapi-volume-344be482-3d7c-45d6-99bd-0de6fb88c08c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:55:09.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8305" for this suite.

• [SLOW TEST:7.501 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3062,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:55:09.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 20 14:55:09.945: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2548 /api/v1/namespaces/watch-2548/configmaps/e2e-watch-test-resource-version 690ff442-fec8-46ef-9ff2-c70a9a1e6bad 2744853 0 2020-07-20 14:55:09 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-07-20 14:55:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 20 14:55:09.945: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2548 /api/v1/namespaces/watch-2548/configmaps/e2e-watch-test-resource-version 690ff442-fec8-46ef-9ff2-c70a9a1e6bad 2744854 0 2020-07-20 14:55:09 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-07-20 14:55:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:55:09.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2548" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":180,"skipped":3103,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:55:09.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 14:55:10.565: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 14:55:12.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853710, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853710, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853710, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853710, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 14:55:14.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853710, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853710, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853710, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730853710, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 14:55:17.972: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:55:19.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3252" for this suite.
STEP: Destroying namespace "webhook-3252-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.650 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":181,"skipped":3103,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:55:19.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 20 14:55:19.765: INFO: Waiting up to 5m0s for pod "pod-96f8f188-7779-4730-a6f9-b11d6da84575" in namespace "emptydir-2836" to be "Succeeded or Failed"
Jul 20 14:55:19.769: INFO: Pod "pod-96f8f188-7779-4730-a6f9-b11d6da84575": Phase="Pending", Reason="", readiness=false. Elapsed: 4.69192ms
Jul 20 14:55:21.894: INFO: Pod "pod-96f8f188-7779-4730-a6f9-b11d6da84575": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128921189s
Jul 20 14:55:23.966: INFO: Pod "pod-96f8f188-7779-4730-a6f9-b11d6da84575": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200851244s
STEP: Saw pod success
Jul 20 14:55:23.966: INFO: Pod "pod-96f8f188-7779-4730-a6f9-b11d6da84575" satisfied condition "Succeeded or Failed"
Jul 20 14:55:23.968: INFO: Trying to get logs from node kali-worker pod pod-96f8f188-7779-4730-a6f9-b11d6da84575 container test-container: 
STEP: delete the pod
Jul 20 14:55:24.002: INFO: Waiting for pod pod-96f8f188-7779-4730-a6f9-b11d6da84575 to disappear
Jul 20 14:55:24.026: INFO: Pod pod-96f8f188-7779-4730-a6f9-b11d6da84575 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:55:24.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2836" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3108,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:55:24.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jul 20 14:55:24.369: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:55:43.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6405" for this suite.

• [SLOW TEST:19.562 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3121,"failed":0}
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:55:43.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:55:43.722: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c4b1adc2-e65b-48f1-8719-fec138922a59" in namespace "security-context-test-1096" to be "Succeeded or Failed"
Jul 20 14:55:43.728: INFO: Pod "alpine-nnp-false-c4b1adc2-e65b-48f1-8719-fec138922a59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285468ms
Jul 20 14:55:45.733: INFO: Pod "alpine-nnp-false-c4b1adc2-e65b-48f1-8719-fec138922a59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010880033s
Jul 20 14:55:47.737: INFO: Pod "alpine-nnp-false-c4b1adc2-e65b-48f1-8719-fec138922a59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015798472s
Jul 20 14:55:49.742: INFO: Pod "alpine-nnp-false-c4b1adc2-e65b-48f1-8719-fec138922a59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020282877s
Jul 20 14:55:49.742: INFO: Pod "alpine-nnp-false-c4b1adc2-e65b-48f1-8719-fec138922a59" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:55:49.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1096" for this suite.

• [SLOW TEST:6.192 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:55:49.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 20 14:55:49.892: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4495 /api/v1/namespaces/watch-4495/configmaps/e2e-watch-test-label-changed 1320b303-315f-41ae-abba-2af41c072c79 2745141 0 2020-07-20 14:55:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-20 14:55:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 20 14:55:49.892: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4495 /api/v1/namespaces/watch-4495/configmaps/e2e-watch-test-label-changed 1320b303-315f-41ae-abba-2af41c072c79 2745143 0 2020-07-20 14:55:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-20 14:55:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 20 14:55:49.892: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4495 /api/v1/namespaces/watch-4495/configmaps/e2e-watch-test-label-changed 1320b303-315f-41ae-abba-2af41c072c79 2745144 0 2020-07-20 14:55:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-20 14:55:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 20 14:56:00.105: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4495 /api/v1/namespaces/watch-4495/configmaps/e2e-watch-test-label-changed 1320b303-315f-41ae-abba-2af41c072c79 2745185 0 2020-07-20 14:55:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-20 14:55:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 20 14:56:00.105: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4495 /api/v1/namespaces/watch-4495/configmaps/e2e-watch-test-label-changed 1320b303-315f-41ae-abba-2af41c072c79 2745187 0 2020-07-20 14:55:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-20 14:56:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 20 14:56:00.106: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4495 /api/v1/namespaces/watch-4495/configmaps/e2e-watch-test-label-changed 1320b303-315f-41ae-abba-2af41c072c79 2745188 0 2020-07-20 14:55:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-20 14:56:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:56:00.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4495" for this suite.

• [SLOW TEST:10.382 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":185,"skipped":3148,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:56:00.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-eb5b14b7-2ef9-4e2f-8762-80d51266c528
STEP: Creating a pod to test consume secrets
Jul 20 14:56:00.418: INFO: Waiting up to 5m0s for pod "pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841" in namespace "secrets-8196" to be "Succeeded or Failed"
Jul 20 14:56:00.422: INFO: Pod "pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841": Phase="Pending", Reason="", readiness=false. Elapsed: 3.269293ms
Jul 20 14:56:02.433: INFO: Pod "pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014470859s
Jul 20 14:56:04.469: INFO: Pod "pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050800482s
Jul 20 14:56:06.473: INFO: Pod "pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055058939s
STEP: Saw pod success
Jul 20 14:56:06.473: INFO: Pod "pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841" satisfied condition "Succeeded or Failed"
Jul 20 14:56:06.476: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841 container secret-volume-test: 
STEP: delete the pod
Jul 20 14:56:06.542: INFO: Waiting for pod pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841 to disappear
Jul 20 14:56:06.555: INFO: Pod pod-secrets-67ddfe10-f5fc-469b-b115-ba44a98c9841 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:56:06.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8196" for this suite.

• [SLOW TEST:6.391 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3152,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:56:06.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:56:06.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045" in namespace "projected-8742" to be "Succeeded or Failed"
Jul 20 14:56:06.750: INFO: Pod "downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045": Phase="Pending", Reason="", readiness=false. Elapsed: 43.947556ms
Jul 20 14:56:08.754: INFO: Pod "downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047595082s
Jul 20 14:56:11.062: INFO: Pod "downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045": Phase="Running", Reason="", readiness=true. Elapsed: 4.355913316s
Jul 20 14:56:13.066: INFO: Pod "downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.359830262s
STEP: Saw pod success
Jul 20 14:56:13.066: INFO: Pod "downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045" satisfied condition "Succeeded or Failed"
Jul 20 14:56:13.069: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045 container client-container: 
STEP: delete the pod
Jul 20 14:56:13.111: INFO: Waiting for pod downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045 to disappear
Jul 20 14:56:13.122: INFO: Pod downwardapi-volume-46d587c5-239d-4d1e-ab71-d11d6aaa3045 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:56:13.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8742" for this suite.

• [SLOW TEST:6.588 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:56:13.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:56:13.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3573'
Jul 20 14:56:13.529: INFO: stderr: ""
Jul 20 14:56:13.529: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jul 20 14:56:13.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3573'
Jul 20 14:56:13.875: INFO: stderr: ""
Jul 20 14:56:13.875: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 20 14:56:14.885: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 14:56:14.885: INFO: Found 0 / 1
Jul 20 14:56:15.880: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 14:56:15.880: INFO: Found 0 / 1
Jul 20 14:56:16.880: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 14:56:16.880: INFO: Found 1 / 1
Jul 20 14:56:16.880: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 20 14:56:16.883: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 14:56:16.883: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 20 14:56:16.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe pod agnhost-master-9cqp6 --namespace=kubectl-3573'
Jul 20 14:56:16.986: INFO: stderr: ""
Jul 20 14:56:16.986: INFO: stdout: "Name:         agnhost-master-9cqp6\nNamespace:    kubectl-3573\nPriority:     0\nNode:         kali-worker2/172.18.0.15\nStart Time:   Mon, 20 Jul 2020 14:56:13 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.224\nIPs:\n  IP:           10.244.1.224\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://4ae8bc4b163a8d258af7e5503f7e43690950bbeda4b871c3d9862b0e8cbf591f\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 20 Jul 2020 14:56:16 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cbcgj (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-cbcgj:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-cbcgj\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  3s    default-scheduler      Successfully assigned kubectl-3573/agnhost-master-9cqp6 to kali-worker2\n  Normal  Pulled     2s    kubelet, kali-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    1s    kubelet, kali-worker2  Created container agnhost-master\n  Normal  Started    0s    kubelet, kali-worker2  Started container agnhost-master\n"
Jul 20 14:56:16.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3573'
Jul 20 14:56:17.102: INFO: stderr: ""
Jul 20 14:56:17.102: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3573\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-master-9cqp6\n"
Jul 20 14:56:17.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3573'
Jul 20 14:56:17.258: INFO: stderr: ""
Jul 20 14:56:17.258: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3573\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.104.102.58\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.224:6379\nSession Affinity:  None\nEvents:            \n"
Jul 20 14:56:17.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Jul 20 14:56:17.387: INFO: stderr: ""
Jul 20 14:56:17.387: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 10 Jul 2020 10:27:46 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Mon, 20 Jul 2020 14:56:13 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 20 Jul 2020 14:52:00 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 20 Jul 2020 14:52:00 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 20 Jul 2020 14:52:00 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 20 Jul 2020 14:52:00 +0000   Fri, 10 Jul 2020 10:28:23 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.16\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 d83d42c4b42d4de1b3233683d9cadf95\n  System UUID:                e06c57c7-ce4f-4ae9-8bb6-40f1dc0e1a64\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.0-beta.1-34-g49b0743c\n  Kubelet Version:            v1.18.4\n  Kube-Proxy Version:         v1.18.4\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-qtcqs                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     10d\n  kube-system                 coredns-66bff467f8-tjkg9                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     10d\n  kube-system                 etcd-kali-control-plane            
           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10d\n  kube-system                 kindnet-zxw2f                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      10d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         10d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         10d\n  kube-system                 kube-proxy-xmqbs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         10d\n  local-path-storage          local-path-provisioner-67795f75bd-clsb6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
Jul 20 14:56:17.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe namespace kubectl-3573'
Jul 20 14:56:17.516: INFO: stderr: ""
Jul 20 14:56:17.516: INFO: stdout: "Name:         kubectl-3573\nLabels:       e2e-framework=kubectl\n              e2e-run=0eee3290-f559-4eda-8f35-b684cd40747d\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:56:17.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3573" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":188,"skipped":3225,"failed":0}
SSSSSSSSSSSSS
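
The describe checks recorded above can be rerun by hand with the same kubectl subcommands; the suite only adds explicit --server and --kubeconfig flags. Object names and the kubectl-3573 namespace are specific to this run:

kubectl describe pod agnhost-master-9cqp6 --namespace=kubectl-3573
kubectl describe rc agnhost-master --namespace=kubectl-3573
kubectl describe service agnhost-master --namespace=kubectl-3573
kubectl describe node kali-control-plane
kubectl describe namespace kubectl-3573
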
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:56:17.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4327
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4327
I0720 14:56:17.846505       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4327, replica count: 2
I0720 14:56:20.897036       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:56:23.897265       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 20 14:56:23.897: INFO: Creating new exec pod
Jul 20 14:56:31.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-4327 execpodtmk8h -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 20 14:56:31.547: INFO: stderr: "I0720 14:56:31.478887    2285 log.go:172] (0xc0009e33f0) (0xc00099e780) Create stream\nI0720 14:56:31.478940    2285 log.go:172] (0xc0009e33f0) (0xc00099e780) Stream added, broadcasting: 1\nI0720 14:56:31.480690    2285 log.go:172] (0xc0009e33f0) Reply frame received for 1\nI0720 14:56:31.480843    2285 log.go:172] (0xc0009e33f0) (0xc000aa6820) Create stream\nI0720 14:56:31.480859    2285 log.go:172] (0xc0009e33f0) (0xc000aa6820) Stream added, broadcasting: 3\nI0720 14:56:31.482260    2285 log.go:172] (0xc0009e33f0) Reply frame received for 3\nI0720 14:56:31.482289    2285 log.go:172] (0xc0009e33f0) (0xc0009dc3c0) Create stream\nI0720 14:56:31.482300    2285 log.go:172] (0xc0009e33f0) (0xc0009dc3c0) Stream added, broadcasting: 5\nI0720 14:56:31.483180    2285 log.go:172] (0xc0009e33f0) Reply frame received for 5\nI0720 14:56:31.541677    2285 log.go:172] (0xc0009e33f0) Data frame received for 3\nI0720 14:56:31.541713    2285 log.go:172] (0xc0009e33f0) Data frame received for 5\nI0720 14:56:31.541742    2285 log.go:172] (0xc0009dc3c0) (5) Data frame handling\nI0720 14:56:31.541759    2285 log.go:172] (0xc0009dc3c0) (5) Data frame sent\nI0720 14:56:31.541773    2285 log.go:172] (0xc0009e33f0) Data frame received for 5\nI0720 14:56:31.541784    2285 log.go:172] (0xc0009dc3c0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0720 14:56:31.541824    2285 log.go:172] (0xc000aa6820) (3) Data frame handling\nI0720 14:56:31.543061    2285 log.go:172] (0xc0009e33f0) Data frame received for 1\nI0720 14:56:31.543091    2285 log.go:172] (0xc00099e780) (1) Data frame handling\nI0720 14:56:31.543112    2285 log.go:172] (0xc00099e780) (1) Data frame sent\nI0720 14:56:31.543138    2285 log.go:172] (0xc0009e33f0) (0xc00099e780) Stream removed, broadcasting: 1\nI0720 14:56:31.543151    2285 log.go:172] (0xc0009e33f0) Go away received\nI0720 14:56:31.543438    2285 log.go:172] (0xc0009e33f0) (0xc00099e780) Stream removed, broadcasting: 1\nI0720 14:56:31.543455    2285 log.go:172] (0xc0009e33f0) (0xc000aa6820) Stream removed, broadcasting: 3\nI0720 14:56:31.543464    2285 log.go:172] (0xc0009e33f0) (0xc0009dc3c0) Stream removed, broadcasting: 5\n"
Jul 20 14:56:31.548: INFO: stdout: ""
Jul 20 14:56:31.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-4327 execpodtmk8h -- /bin/sh -x -c nc -zv -t -w 2 10.110.169.119 80'
Jul 20 14:56:31.750: INFO: stderr: "I0720 14:56:31.663738    2303 log.go:172] (0xc000742580) (0xc00064f860) Create stream\nI0720 14:56:31.663779    2303 log.go:172] (0xc000742580) (0xc00064f860) Stream added, broadcasting: 1\nI0720 14:56:31.666386    2303 log.go:172] (0xc000742580) Reply frame received for 1\nI0720 14:56:31.666405    2303 log.go:172] (0xc000742580) (0xc0008f6000) Create stream\nI0720 14:56:31.666411    2303 log.go:172] (0xc000742580) (0xc0008f6000) Stream added, broadcasting: 3\nI0720 14:56:31.667169    2303 log.go:172] (0xc000742580) Reply frame received for 3\nI0720 14:56:31.667204    2303 log.go:172] (0xc000742580) (0xc00064f900) Create stream\nI0720 14:56:31.667218    2303 log.go:172] (0xc000742580) (0xc00064f900) Stream added, broadcasting: 5\nI0720 14:56:31.667957    2303 log.go:172] (0xc000742580) Reply frame received for 5\nI0720 14:56:31.744114    2303 log.go:172] (0xc000742580) Data frame received for 3\nI0720 14:56:31.744149    2303 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0720 14:56:31.744202    2303 log.go:172] (0xc000742580) Data frame received for 5\nI0720 14:56:31.744232    2303 log.go:172] (0xc00064f900) (5) Data frame handling\nI0720 14:56:31.744246    2303 log.go:172] (0xc00064f900) (5) Data frame sent\nI0720 14:56:31.744253    2303 log.go:172] (0xc000742580) Data frame received for 5\nI0720 14:56:31.744258    2303 log.go:172] (0xc00064f900) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.169.119 80\nConnection to 10.110.169.119 80 port [tcp/http] succeeded!\nI0720 14:56:31.745715    2303 log.go:172] (0xc000742580) Data frame received for 1\nI0720 14:56:31.745737    2303 log.go:172] (0xc00064f860) (1) Data frame handling\nI0720 14:56:31.745755    2303 log.go:172] (0xc00064f860) (1) Data frame sent\nI0720 14:56:31.745786    2303 log.go:172] (0xc000742580) (0xc00064f860) Stream removed, broadcasting: 1\nI0720 14:56:31.745806    2303 log.go:172] (0xc000742580) Go away received\nI0720 14:56:31.746224    2303 log.go:172] (0xc000742580) (0xc00064f860) Stream removed, broadcasting: 1\nI0720 14:56:31.746253    2303 log.go:172] (0xc000742580) (0xc0008f6000) Stream removed, broadcasting: 3\nI0720 14:56:31.746263    2303 log.go:172] (0xc000742580) (0xc00064f900) Stream removed, broadcasting: 5\n"
Jul 20 14:56:31.750: INFO: stdout: ""
Jul 20 14:56:31.750: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:56:31.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4327" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:14.284 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":189,"skipped":3238,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
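
Outside the suite, the same type change can be sketched with plain kubectl. The names below mirror this run, the external name is a placeholder, and the merge patch is one possible shape for the change, not the suite's exact request:

# Start with an ExternalName service
kubectl create service externalname externalname-service --external-name=foo.example.com -n services-4327
# Re-type it as ClusterIP, dropping externalName and adding a port and selector for the backends
kubectl patch service externalname-service -n services-4327 --type=merge -p '{
  "spec": {
    "type": "ClusterIP",
    "externalName": null,
    "selector": {"name": "externalname-service"},
    "ports": [{"port": 80, "targetPort": 80, "protocol": "TCP"}]
  }
}'
# Verify reachability from inside the cluster, as the test did with nc
kubectl exec -n services-4327 execpodtmk8h -- /bin/sh -c 'nc -zv -t -w 2 externalname-service 80'
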
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:56:31.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Jul 20 14:56:36.987: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:56:37.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9413" for this suite.

• [SLOW TEST:5.353 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":190,"skipped":3269,"failed":0}
S
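
The adopt/release behaviour above is driven entirely by labels and ownerReferences. A hand-run sketch of the same flow, with the pod name, label value and httpd image chosen for illustration:

# A bare pod carrying the label the ReplicaSet will select on
kubectl run pod-adoption-release --image=httpd:2.4.38-alpine --labels=name=pod-adoption-release --restart=Never
# A ReplicaSet whose selector matches that label adopts the existing pod instead of creating one
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.38-alpine
EOF
# Adoption shows up as an ownerReference on the pod
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Changing the matched label releases the pod; the ReplicaSet then creates a replacement
kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite
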
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:56:37.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4349.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4349.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 14:56:51.527: INFO: DNS probes using dns-4349/dns-test-be4dc615-0d34-4e27-8c38-2e521800c4a1 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:56:51.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4349" for this suite.

• [SLOW TEST:14.521 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":191,"skipped":3270,"failed":0}
SS
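
The probe pod above loops dig (with the +notcp/+tcp, +noall +answer and +search flags shown) against the cluster DNS name. A simpler manual check from a throwaway pod, using busybox's nslookup as a stand-in for dig (pod name and image are illustrative):

kubectl run dns-probe --image=busybox:1.28 --restart=Never -- sleep 3600
# Resolves via the pod's configured cluster DNS and search path
kubectl exec dns-probe -- nslookup kubernetes.default.svc.cluster.local
kubectl delete pod dns-probe
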
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:56:51.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:56:52.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1003" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":192,"skipped":3272,"failed":0}
SSSSSSSSSSSSS
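
This spec essentially checks the built-in "kubernetes" service in the default namespace, which fronts the API server over HTTPS. An equivalent manual look:

kubectl get service kubernetes --namespace=default -o wide
# The port named https it is expected to expose (normally 443)
kubectl get service kubernetes --namespace=default -o jsonpath='{.spec.ports[?(@.name=="https")].port}'
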
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:56:52.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[]
Jul 20 14:56:52.673: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[] (82.41565ms elapsed)
STEP: Creating pod pod1 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[pod1:[80]]
Jul 20 14:56:57.103: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[pod1:[80]] (4.423703607s elapsed)
STEP: Creating pod pod2 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[pod1:[80] pod2:[80]]
Jul 20 14:57:01.353: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[pod1:[80] pod2:[80]] (4.244323615s elapsed)
STEP: Deleting pod pod1 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[pod2:[80]]
Jul 20 14:57:02.428: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[pod2:[80]] (1.071039186s elapsed)
STEP: Deleting pod pod2 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[]
Jul 20 14:57:03.673: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[] (1.239884641s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:57:03.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2532" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.659 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":193,"skipped":3285,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
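
The endpoint map the test validates is the ordinary Endpoints object for the service, so the same bookkeeping can be inspected directly (service name and namespace are from this run):

kubectl get endpoints endpoint-test2 --namespace=services-2532 -o yaml
# Or follow additions and removals live while pods come and go
kubectl get endpoints endpoint-test2 --namespace=services-2532 --watch
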
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:57:03.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0720 14:57:05.563093       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 14:57:05.563: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:57:05.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1878" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":194,"skipped":3317,"failed":0}
SSSSSSSSSSSSSSS
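
The behaviour under test is ordinary cascading deletion: removing a Deployment lets the garbage collector clean up its ReplicaSet and Pods through their ownerReferences. A hand-run sketch with an illustrative deployment name:

kubectl create deployment gc-demo --image=httpd:2.4.38-alpine
# The generated ReplicaSet points back at the Deployment via an ownerReference
kubectl get rs -l app=gc-demo -o jsonpath='{.items[0].metadata.ownerReferences[0].name}'
# Deleting the Deployment (background cascade by default) eventually removes the ReplicaSet and Pods too
kubectl delete deployment gc-demo
kubectl get rs,pods -l app=gc-demo
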
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:57:05.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-647.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-647.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-647.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-647.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-647.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-647.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-647.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-647.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-647.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-647.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-647.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 10.214.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.214.10_udp@PTR;check="$$(dig +tcp +noall +answer +search 10.214.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.214.10_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-647.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-647.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-647.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-647.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-647.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-647.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-647.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-647.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-647.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-647.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-647.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 10.214.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.214.10_udp@PTR;check="$$(dig +tcp +noall +answer +search 10.214.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.214.10_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 14:57:13.060: INFO: Unable to read wheezy_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:13.064: INFO: Unable to read wheezy_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:13.067: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:13.071: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:13.127: INFO: Unable to read jessie_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:13.131: INFO: Unable to read jessie_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:13.163: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:13.169: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:13.201: INFO: Lookups using dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8 failed for: [wheezy_udp@dns-test-service.dns-647.svc.cluster.local wheezy_tcp@dns-test-service.dns-647.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_udp@dns-test-service.dns-647.svc.cluster.local jessie_tcp@dns-test-service.dns-647.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local]

Jul 20 14:57:18.206: INFO: Unable to read wheezy_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:18.211: INFO: Unable to read wheezy_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:18.214: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:18.216: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:18.237: INFO: Unable to read jessie_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:18.240: INFO: Unable to read jessie_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:18.242: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:18.245: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:18.267: INFO: Lookups using dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8 failed for: [wheezy_udp@dns-test-service.dns-647.svc.cluster.local wheezy_tcp@dns-test-service.dns-647.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_udp@dns-test-service.dns-647.svc.cluster.local jessie_tcp@dns-test-service.dns-647.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local]

Jul 20 14:57:23.206: INFO: Unable to read wheezy_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:23.209: INFO: Unable to read wheezy_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:23.212: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:23.215: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:23.251: INFO: Unable to read jessie_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:23.254: INFO: Unable to read jessie_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:23.257: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:23.261: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:23.302: INFO: Lookups using dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8 failed for: [wheezy_udp@dns-test-service.dns-647.svc.cluster.local wheezy_tcp@dns-test-service.dns-647.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_udp@dns-test-service.dns-647.svc.cluster.local jessie_tcp@dns-test-service.dns-647.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local]

Jul 20 14:57:28.206: INFO: Unable to read wheezy_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:28.210: INFO: Unable to read wheezy_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:28.214: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:28.218: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:28.237: INFO: Unable to read jessie_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:28.239: INFO: Unable to read jessie_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:28.241: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:28.244: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:28.261: INFO: Lookups using dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8 failed for: [wheezy_udp@dns-test-service.dns-647.svc.cluster.local wheezy_tcp@dns-test-service.dns-647.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_udp@dns-test-service.dns-647.svc.cluster.local jessie_tcp@dns-test-service.dns-647.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local]

Jul 20 14:57:33.207: INFO: Unable to read wheezy_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:33.211: INFO: Unable to read wheezy_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:33.215: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:33.218: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:33.259: INFO: Unable to read jessie_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:33.277: INFO: Unable to read jessie_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:33.281: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:33.284: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:33.300: INFO: Lookups using dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8 failed for: [wheezy_udp@dns-test-service.dns-647.svc.cluster.local wheezy_tcp@dns-test-service.dns-647.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_udp@dns-test-service.dns-647.svc.cluster.local jessie_tcp@dns-test-service.dns-647.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local]

Jul 20 14:57:38.458: INFO: Unable to read wheezy_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:38.462: INFO: Unable to read wheezy_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:38.465: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:38.529: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:38.645: INFO: Unable to read jessie_udp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:38.648: INFO: Unable to read jessie_tcp@dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:38.651: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:38.654: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local from pod dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8: the server could not find the requested resource (get pods dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8)
Jul 20 14:57:38.694: INFO: Lookups using dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8 failed for: [wheezy_udp@dns-test-service.dns-647.svc.cluster.local wheezy_tcp@dns-test-service.dns-647.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_udp@dns-test-service.dns-647.svc.cluster.local jessie_tcp@dns-test-service.dns-647.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-647.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-647.svc.cluster.local]

Jul 20 14:57:43.266: INFO: DNS probes using dns-647/dns-test-a75a8de2-df81-462f-9b17-0de3be71c8c8 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:57:44.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-647" for this suite.

• [SLOW TEST:38.817 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":195,"skipped":3332,"failed":0}
SSSSSSSSS
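
Beyond plain A records, this spec also resolves SRV records for the service's named http port. The dig invocations below are the same ones the probe containers ran; they have to be executed from a pod inside the cluster that has dig installed (dns-647 is this run's namespace):

dig +notcp +noall +answer +search dns-test-service.dns-647.svc.cluster.local A
dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-647.svc.cluster.local SRV
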
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:57:44.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:57:44.676: INFO: Create a RollingUpdate DaemonSet
Jul 20 14:57:44.680: INFO: Check that daemon pods launch on every node of the cluster
Jul 20 14:57:44.724: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:57:44.823: INFO: Number of nodes with available pods: 0
Jul 20 14:57:44.823: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:57:45.828: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:57:45.832: INFO: Number of nodes with available pods: 0
Jul 20 14:57:45.832: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:57:46.920: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:57:46.924: INFO: Number of nodes with available pods: 0
Jul 20 14:57:46.924: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:57:47.828: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:57:47.831: INFO: Number of nodes with available pods: 0
Jul 20 14:57:47.831: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:57:48.827: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:57:48.830: INFO: Number of nodes with available pods: 0
Jul 20 14:57:48.830: INFO: Node kali-worker is running more than one daemon pod
Jul 20 14:57:49.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:57:49.834: INFO: Number of nodes with available pods: 2
Jul 20 14:57:49.834: INFO: Number of running nodes: 2, number of available pods: 2
Jul 20 14:57:49.835: INFO: Update the DaemonSet to trigger a rollout
Jul 20 14:57:49.869: INFO: Updating DaemonSet daemon-set
Jul 20 14:58:03.908: INFO: Roll back the DaemonSet before rollout is complete
Jul 20 14:58:03.915: INFO: Updating DaemonSet daemon-set
Jul 20 14:58:03.915: INFO: Make sure DaemonSet rollback is complete
Jul 20 14:58:03.961: INFO: Wrong image for pod: daemon-set-dv47f. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 20 14:58:03.961: INFO: Pod daemon-set-dv47f is not available
Jul 20 14:58:03.991: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:58:05.011: INFO: Wrong image for pod: daemon-set-dv47f. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 20 14:58:05.011: INFO: Pod daemon-set-dv47f is not available
Jul 20 14:58:05.015: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 14:58:06.005: INFO: Pod daemon-set-nppq5 is not available
Jul 20 14:58:06.009: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8530, will wait for the garbage collector to delete the pods
Jul 20 14:58:06.111: INFO: Deleting DaemonSet.extensions daemon-set took: 5.866281ms
Jul 20 14:58:06.511: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.282976ms
Jul 20 14:58:13.415: INFO: Number of nodes with available pods: 0
Jul 20 14:58:13.415: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 14:58:13.417: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8530/daemonsets","resourceVersion":"2746072"},"items":null}

Jul 20 14:58:13.419: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8530/pods","resourceVersion":"2746072"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:58:13.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8530" for this suite.

• [SLOW TEST:29.046 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":196,"skipped":3341,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:58:13.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:58:13.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-716fd87a-64d2-41ab-b69a-1ead1a979a4d" in namespace "projected-3176" to be "Succeeded or Failed"
Jul 20 14:58:13.529: INFO: Pod "downwardapi-volume-716fd87a-64d2-41ab-b69a-1ead1a979a4d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.443851ms
Jul 20 14:58:15.533: INFO: Pod "downwardapi-volume-716fd87a-64d2-41ab-b69a-1ead1a979a4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013500575s
Jul 20 14:58:18.118: INFO: Pod "downwardapi-volume-716fd87a-64d2-41ab-b69a-1ead1a979a4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.598122034s
STEP: Saw pod success
Jul 20 14:58:18.118: INFO: Pod "downwardapi-volume-716fd87a-64d2-41ab-b69a-1ead1a979a4d" satisfied condition "Succeeded or Failed"
Jul 20 14:58:18.121: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-716fd87a-64d2-41ab-b69a-1ead1a979a4d container client-container: 
STEP: delete the pod
Jul 20 14:58:18.518: INFO: Waiting for pod downwardapi-volume-716fd87a-64d2-41ab-b69a-1ead1a979a4d to disappear
Jul 20 14:58:18.535: INFO: Pod downwardapi-volume-716fd87a-64d2-41ab-b69a-1ead1a979a4d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:58:18.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3176" for this suite.

• [SLOW TEST:5.108 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3395,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:58:18.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 14:58:19.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85" in namespace "downward-api-3781" to be "Succeeded or Failed"
Jul 20 14:58:19.351: INFO: Pod "downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85": Phase="Pending", Reason="", readiness=false. Elapsed: 250.468592ms
Jul 20 14:58:21.355: INFO: Pod "downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254734514s
Jul 20 14:58:23.360: INFO: Pod "downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85": Phase="Running", Reason="", readiness=true. Elapsed: 4.259390051s
Jul 20 14:58:25.365: INFO: Pod "downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264561125s
STEP: Saw pod success
Jul 20 14:58:25.365: INFO: Pod "downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85" satisfied condition "Succeeded or Failed"
Jul 20 14:58:25.368: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85 container client-container: 
STEP: delete the pod
Jul 20 14:58:25.846: INFO: Waiting for pod downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85 to disappear
Jul 20 14:58:25.931: INFO: Pod downwardapi-volume-44fab08b-252a-4a22-868d-42b339de4c85 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:58:25.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3781" for this suite.

• [SLOW TEST:7.397 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3401,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:58:25.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-vbsnz in namespace proxy-8269
I0720 14:58:26.351137       7 runners.go:190] Created replication controller with name: proxy-service-vbsnz, namespace: proxy-8269, replica count: 1
I0720 14:58:27.401878       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:58:28.402118       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:58:29.402342       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:58:30.402575       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 14:58:31.402817       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 14:58:32.403041       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 14:58:33.403226       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 14:58:34.403429       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 14:58:35.403615       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 14:58:36.403835       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 14:58:37.404060       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 14:58:38.404270       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 14:58:39.404485       7 runners.go:190] proxy-service-vbsnz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 20 14:58:39.422: INFO: setup took 13.342428436s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul 20 14:58:39.454: INFO: (0) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 31.993564ms)
Jul 20 14:58:39.455: INFO: (0) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 31.067341ms)
Jul 20 14:58:39.455: INFO: (0) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 32.576038ms)
Jul 20 14:58:39.455: INFO: (0) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 29.137088ms)
Jul 20 14:58:39.455: INFO: (0) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 32.720446ms)
Jul 20 14:58:39.455: INFO: (0) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 32.696653ms)
Jul 20 14:58:39.455: INFO: (0) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 32.958826ms)
Jul 20 14:58:39.455: INFO: (0) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 33.116441ms)
Jul 20 14:58:39.456: INFO: (0) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 33.694961ms)
Jul 20 14:58:39.456: INFO: (0) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 32.223025ms)
Jul 20 14:58:39.457: INFO: (0) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 34.2957ms)
Jul 20 14:58:39.462: INFO: (0) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 39.620191ms)
Jul 20 14:58:39.462: INFO: (0) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 39.721243ms)
Jul 20 14:58:39.464: INFO: (0) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test (200; 7.311065ms)
Jul 20 14:58:39.472: INFO: (1) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 7.44244ms)
Jul 20 14:58:39.472: INFO: (1) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 7.613179ms)
Jul 20 14:58:39.472: INFO: (1) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 7.547944ms)
Jul 20 14:58:39.472: INFO: (1) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: ... (200; 7.8496ms)
Jul 20 14:58:39.482: INFO: (2) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 8.540276ms)
Jul 20 14:58:39.483: INFO: (2) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test (200; 9.463616ms)
Jul 20 14:58:39.483: INFO: (2) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 9.563685ms)
Jul 20 14:58:39.484: INFO: (2) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 9.561966ms)
Jul 20 14:58:39.484: INFO: (2) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 9.897926ms)
Jul 20 14:58:39.487: INFO: (3) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 3.01344ms)
Jul 20 14:58:39.487: INFO: (3) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 3.506523ms)
Jul 20 14:58:39.487: INFO: (3) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 3.512826ms)
Jul 20 14:58:39.488: INFO: (3) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 3.714343ms)
Jul 20 14:58:39.488: INFO: (3) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 4.360073ms)
Jul 20 14:58:39.488: INFO: (3) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 4.526182ms)
Jul 20 14:58:39.489: INFO: (3) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 5.043628ms)
Jul 20 14:58:39.489: INFO: (3) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 5.092243ms)
Jul 20 14:58:39.489: INFO: (3) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 5.175061ms)
Jul 20 14:58:39.489: INFO: (3) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 5.194881ms)
Jul 20 14:58:39.489: INFO: (3) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 5.137204ms)
Jul 20 14:58:39.489: INFO: (3) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 5.386427ms)
Jul 20 14:58:39.489: INFO: (3) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 5.309004ms)
Jul 20 14:58:39.489: INFO: (3) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: ... (200; 3.740199ms)
Jul 20 14:58:39.494: INFO: (4) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 3.735878ms)
Jul 20 14:58:39.495: INFO: (4) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 4.543541ms)
Jul 20 14:58:39.495: INFO: (4) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 4.565557ms)
Jul 20 14:58:39.495: INFO: (4) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 4.566898ms)
Jul 20 14:58:39.495: INFO: (4) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 4.664754ms)
Jul 20 14:58:39.495: INFO: (4) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 4.592598ms)
Jul 20 14:58:39.495: INFO: (4) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 4.633786ms)
Jul 20 14:58:39.495: INFO: (4) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 4.686388ms)
Jul 20 14:58:39.495: INFO: (4) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 4.668925ms)
Jul 20 14:58:39.496: INFO: (4) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 4.861402ms)
Jul 20 14:58:39.496: INFO: (4) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test<... (200; 3.969739ms)
Jul 20 14:58:39.500: INFO: (5) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 3.93092ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 5.401668ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 5.395196ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test (200; 5.502991ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 5.452717ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 5.610752ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 5.569028ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 5.500587ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 5.569077ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 5.617926ms)
Jul 20 14:58:39.501: INFO: (5) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 5.649098ms)
Jul 20 14:58:39.510: INFO: (6) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 9.03459ms)
Jul 20 14:58:39.510: INFO: (6) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 9.050625ms)
Jul 20 14:58:39.510: INFO: (6) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 9.051091ms)
Jul 20 14:58:39.511: INFO: (6) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 9.437315ms)
Jul 20 14:58:39.511: INFO: (6) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 9.459419ms)
Jul 20 14:58:39.511: INFO: (6) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 9.413018ms)
Jul 20 14:58:39.511: INFO: (6) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test<... (200; 12.380838ms)
Jul 20 14:58:39.514: INFO: (6) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 12.314509ms)
Jul 20 14:58:39.514: INFO: (6) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 12.29823ms)
Jul 20 14:58:39.514: INFO: (6) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 12.419655ms)
Jul 20 14:58:39.514: INFO: (6) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 12.504822ms)
Jul 20 14:58:39.517: INFO: (7) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: ... (200; 4.138694ms)
Jul 20 14:58:39.518: INFO: (7) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 4.259874ms)
Jul 20 14:58:39.518: INFO: (7) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 4.367033ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 4.605414ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 4.631801ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 4.672586ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 4.74519ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 4.670465ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 4.782926ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 4.718654ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 4.727867ms)
Jul 20 14:58:39.519: INFO: (7) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.928857ms)
Jul 20 14:58:39.522: INFO: (8) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 2.482771ms)
Jul 20 14:58:39.522: INFO: (8) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 2.587796ms)
Jul 20 14:58:39.522: INFO: (8) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 2.580266ms)
Jul 20 14:58:39.522: INFO: (8) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 2.83014ms)
Jul 20 14:58:39.548: INFO: (8) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 29.231589ms)
Jul 20 14:58:39.548: INFO: (8) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 29.418612ms)
Jul 20 14:58:39.548: INFO: (8) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 29.374134ms)
Jul 20 14:58:39.549: INFO: (8) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 29.76967ms)
Jul 20 14:58:39.549: INFO: (8) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test<... (200; 29.899012ms)
Jul 20 14:58:39.549: INFO: (8) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 29.946138ms)
Jul 20 14:58:39.549: INFO: (8) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 30.139631ms)
Jul 20 14:58:39.550: INFO: (8) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 30.469178ms)
Jul 20 14:58:39.554: INFO: (9) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 4.500961ms)
Jul 20 14:58:39.554: INFO: (9) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 4.644071ms)
Jul 20 14:58:39.554: INFO: (9) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.628432ms)
Jul 20 14:58:39.555: INFO: (9) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 4.748164ms)
Jul 20 14:58:39.555: INFO: (9) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 4.881888ms)
Jul 20 14:58:39.555: INFO: (9) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test (200; 6.183988ms)
Jul 20 14:58:39.556: INFO: (9) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 6.326477ms)
Jul 20 14:58:39.556: INFO: (9) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 6.31682ms)
Jul 20 14:58:39.556: INFO: (9) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 6.499764ms)
Jul 20 14:58:39.556: INFO: (9) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 6.479682ms)
Jul 20 14:58:39.558: INFO: (10) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 2.138278ms)
Jul 20 14:58:39.560: INFO: (10) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 3.767817ms)
Jul 20 14:58:39.560: INFO: (10) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 3.919861ms)
Jul 20 14:58:39.560: INFO: (10) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 3.836718ms)
Jul 20 14:58:39.561: INFO: (10) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 4.526275ms)
Jul 20 14:58:39.561: INFO: (10) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.871709ms)
Jul 20 14:58:39.561: INFO: (10) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 4.787136ms)
Jul 20 14:58:39.561: INFO: (10) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 4.799559ms)
Jul 20 14:58:39.561: INFO: (10) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 4.857416ms)
Jul 20 14:58:39.561: INFO: (10) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 4.876638ms)
Jul 20 14:58:39.561: INFO: (10) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: ... (200; 4.022362ms)
Jul 20 14:58:39.566: INFO: (11) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.142232ms)
Jul 20 14:58:39.567: INFO: (11) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 5.093834ms)
Jul 20 14:58:39.567: INFO: (11) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 5.22748ms)
Jul 20 14:58:39.567: INFO: (11) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 5.400689ms)
Jul 20 14:58:39.567: INFO: (11) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 5.415987ms)
Jul 20 14:58:39.567: INFO: (11) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 5.904447ms)
Jul 20 14:58:39.567: INFO: (11) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 6.018121ms)
Jul 20 14:58:39.568: INFO: (11) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test (200; 6.104651ms)
Jul 20 14:58:39.568: INFO: (11) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 6.03767ms)
Jul 20 14:58:39.568: INFO: (11) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 6.029399ms)
Jul 20 14:58:39.568: INFO: (11) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 6.195658ms)
Jul 20 14:58:39.568: INFO: (11) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 6.173642ms)
Jul 20 14:58:39.570: INFO: (12) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 2.541039ms)
Jul 20 14:58:39.570: INFO: (12) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 2.698473ms)
Jul 20 14:58:39.570: INFO: (12) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 2.752466ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 4.959195ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 5.49463ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 5.437623ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 5.484852ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 5.582448ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 5.527949ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 5.519021ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 5.658597ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 5.625021ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 5.605682ms)
Jul 20 14:58:39.573: INFO: (12) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: ... (200; 3.851147ms)
Jul 20 14:58:39.578: INFO: (13) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 3.902407ms)
Jul 20 14:58:39.578: INFO: (13) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 4.136039ms)
Jul 20 14:58:39.578: INFO: (13) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 4.203516ms)
Jul 20 14:58:39.579: INFO: (13) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 4.95773ms)
Jul 20 14:58:39.579: INFO: (13) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.936943ms)
Jul 20 14:58:39.579: INFO: (13) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 4.993762ms)
Jul 20 14:58:39.579: INFO: (13) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 5.498351ms)
Jul 20 14:58:39.580: INFO: (13) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 6.000203ms)
Jul 20 14:58:39.580: INFO: (13) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 6.084358ms)
Jul 20 14:58:39.580: INFO: (13) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 6.105639ms)
Jul 20 14:58:39.580: INFO: (13) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 6.71315ms)
Jul 20 14:58:39.580: INFO: (13) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 6.733039ms)
Jul 20 14:58:39.580: INFO: (13) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 6.830225ms)
Jul 20 14:58:39.583: INFO: (14) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 2.676433ms)
Jul 20 14:58:39.584: INFO: (14) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 3.091188ms)
Jul 20 14:58:39.584: INFO: (14) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 3.5423ms)
Jul 20 14:58:39.584: INFO: (14) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 3.653914ms)
Jul 20 14:58:39.585: INFO: (14) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 4.316122ms)
Jul 20 14:58:39.585: INFO: (14) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 4.433881ms)
Jul 20 14:58:39.585: INFO: (14) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 4.459811ms)
Jul 20 14:58:39.585: INFO: (14) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 4.407679ms)
Jul 20 14:58:39.585: INFO: (14) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 4.418609ms)
Jul 20 14:58:39.585: INFO: (14) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 4.414917ms)
Jul 20 14:58:39.585: INFO: (14) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.381915ms)
Jul 20 14:58:39.585: INFO: (14) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: ... (200; 3.748763ms)
Jul 20 14:58:39.589: INFO: (15) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 3.926888ms)
Jul 20 14:58:39.590: INFO: (15) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 4.310105ms)
Jul 20 14:58:39.590: INFO: (15) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 4.361336ms)
Jul 20 14:58:39.590: INFO: (15) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 4.388109ms)
Jul 20 14:58:39.590: INFO: (15) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 4.438172ms)
Jul 20 14:58:39.590: INFO: (15) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 4.4705ms)
Jul 20 14:58:39.590: INFO: (15) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 5.030787ms)
Jul 20 14:58:39.591: INFO: (15) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 5.083024ms)
Jul 20 14:58:39.591: INFO: (15) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 5.178653ms)
Jul 20 14:58:39.591: INFO: (15) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 5.11147ms)
Jul 20 14:58:39.591: INFO: (15) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 5.135104ms)
Jul 20 14:58:39.591: INFO: (15) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 5.051164ms)
Jul 20 14:58:39.591: INFO: (15) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test<... (200; 3.533678ms)
Jul 20 14:58:39.594: INFO: (16) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 3.533411ms)
Jul 20 14:58:39.595: INFO: (16) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 4.255642ms)
Jul 20 14:58:39.595: INFO: (16) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 4.350011ms)
Jul 20 14:58:39.595: INFO: (16) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.333039ms)
Jul 20 14:58:39.595: INFO: (16) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 4.372314ms)
Jul 20 14:58:39.595: INFO: (16) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 4.32987ms)
Jul 20 14:58:39.595: INFO: (16) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: ... (200; 4.631713ms)
Jul 20 14:58:39.595: INFO: (16) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 4.632053ms)
Jul 20 14:58:39.595: INFO: (16) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 4.667155ms)
Jul 20 14:58:39.598: INFO: (17) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 2.203366ms)
Jul 20 14:58:39.598: INFO: (17) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 2.270379ms)
Jul 20 14:58:39.598: INFO: (17) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 2.344367ms)
Jul 20 14:58:39.599: INFO: (17) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 3.88231ms)
Jul 20 14:58:39.599: INFO: (17) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 3.900793ms)
Jul 20 14:58:39.599: INFO: (17) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 3.969385ms)
Jul 20 14:58:39.600: INFO: (17) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 4.211093ms)
Jul 20 14:58:39.600: INFO: (17) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test<... (200; 4.613078ms)
Jul 20 14:58:39.600: INFO: (17) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 4.672074ms)
Jul 20 14:58:39.600: INFO: (17) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 4.599774ms)
Jul 20 14:58:39.600: INFO: (17) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 4.639167ms)
Jul 20 14:58:39.600: INFO: (17) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 4.652851ms)
Jul 20 14:58:39.600: INFO: (17) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 4.648016ms)
Jul 20 14:58:39.604: INFO: (18) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: test (200; 5.066674ms)
Jul 20 14:58:39.605: INFO: (18) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 5.126235ms)
Jul 20 14:58:39.605: INFO: (18) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 5.101281ms)
Jul 20 14:58:39.605: INFO: (18) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 5.067703ms)
Jul 20 14:58:39.605: INFO: (18) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 5.117004ms)
Jul 20 14:58:39.606: INFO: (18) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 5.098609ms)
Jul 20 14:58:39.606: INFO: (18) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:1080/proxy/: ... (200; 5.144297ms)
Jul 20 14:58:39.606: INFO: (18) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 5.306727ms)
Jul 20 14:58:39.606: INFO: (18) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 5.400622ms)
Jul 20 14:58:39.609: INFO: (19) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:460/proxy/: tls baz (200; 3.66647ms)
Jul 20 14:58:39.610: INFO: (19) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname1/proxy/: foo (200; 4.445995ms)
Jul 20 14:58:39.610: INFO: (19) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.42603ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:162/proxy/: bar (200; 4.697543ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname2/proxy/: bar (200; 4.729795ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/pods/http:proxy-service-vbsnz-898m9:160/proxy/: foo (200; 5.304166ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/services/http:proxy-service-vbsnz:portname1/proxy/: foo (200; 5.340242ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:462/proxy/: tls qux (200; 5.327552ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname1/proxy/: tls baz (200; 5.321213ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/services/https:proxy-service-vbsnz:tlsportname2/proxy/: tls qux (200; 5.4847ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/services/proxy-service-vbsnz:portname2/proxy/: bar (200; 5.463407ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:1080/proxy/: test<... (200; 5.447775ms)
Jul 20 14:58:39.611: INFO: (19) /api/v1/namespaces/proxy-8269/pods/https:proxy-service-vbsnz-898m9:443/proxy/: ... (200; 5.576281ms)
Jul 20 14:58:39.612: INFO: (19) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9/proxy/: test (200; 5.949349ms)
Jul 20 14:58:39.612: INFO: (19) /api/v1/namespaces/proxy-8269/pods/proxy-service-vbsnz-898m9:160/proxy/: foo (200; 5.882303ms)
STEP: deleting ReplicationController proxy-service-vbsnz in namespace proxy-8269, will wait for the garbage collector to delete the pods
Jul 20 14:58:39.671: INFO: Deleting ReplicationController proxy-service-vbsnz took: 7.406118ms
Jul 20 14:58:40.071: INFO: Terminating ReplicationController proxy-service-vbsnz pods took: 400.261798ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:58:53.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8269" for this suite.

• [SLOW TEST:27.639 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":199,"skipped":3422,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:58:53.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul 20 14:58:53.726: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:59:09.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9017" for this suite.

• [SLOW TEST:16.360 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":200,"skipped":3431,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:59:09.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 14:59:09.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jul 20 14:59:11.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6771 create -f -'
Jul 20 14:59:15.578: INFO: stderr: ""
Jul 20 14:59:15.578: INFO: stdout: "e2e-test-crd-publish-openapi-8742-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul 20 14:59:15.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6771 delete e2e-test-crd-publish-openapi-8742-crds test-foo'
Jul 20 14:59:15.668: INFO: stderr: ""
Jul 20 14:59:15.668: INFO: stdout: "e2e-test-crd-publish-openapi-8742-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jul 20 14:59:15.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6771 apply -f -'
Jul 20 14:59:15.952: INFO: stderr: ""
Jul 20 14:59:15.952: INFO: stdout: "e2e-test-crd-publish-openapi-8742-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul 20 14:59:15.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6771 delete e2e-test-crd-publish-openapi-8742-crds test-foo'
Jul 20 14:59:16.058: INFO: stderr: ""
Jul 20 14:59:16.058: INFO: stdout: "e2e-test-crd-publish-openapi-8742-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jul 20 14:59:16.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6771 create -f -'
Jul 20 14:59:16.290: INFO: rc: 1
Jul 20 14:59:16.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6771 apply -f -'
Jul 20 14:59:16.515: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jul 20 14:59:16.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6771 create -f -'
Jul 20 14:59:16.755: INFO: rc: 1
Jul 20 14:59:16.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6771 apply -f -'
Jul 20 14:59:17.060: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jul 20 14:59:17.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8742-crds'
Jul 20 14:59:17.291: INFO: stderr: ""
Jul 20 14:59:17.291: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8742-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jul 20 14:59:17.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8742-crds.metadata'
Jul 20 14:59:17.594: INFO: stderr: ""
Jul 20 14:59:17.594: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8742-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jul 20 14:59:17.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8742-crds.spec'
Jul 20 14:59:18.434: INFO: stderr: ""
Jul 20 14:59:18.434: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8742-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul 20 14:59:18.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8742-crds.spec.bars'
Jul 20 14:59:18.796: INFO: stderr: ""
Jul 20 14:59:18.796: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8742-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul 20 14:59:18.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8742-crds.spec.bars2'
Jul 20 14:59:19.159: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:59:22.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6771" for this suite.

• [SLOW TEST:12.261 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":201,"skipped":3433,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:59:22.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6667
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 20 14:59:22.289: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 20 14:59:22.378: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:59:24.417: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 14:59:26.381: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:59:28.383: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:59:30.382: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:59:32.382: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:59:34.383: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:59:36.669: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:59:38.441: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 20 14:59:40.608: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 20 14:59:40.614: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 20 14:59:48.813: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.148 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6667 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 14:59:48.813: INFO: >>> kubeConfig: /root/.kube/config
I0720 14:59:48.844135       7 log.go:172] (0xc0065004d0) (0xc000e8e6e0) Create stream
I0720 14:59:48.844167       7 log.go:172] (0xc0065004d0) (0xc000e8e6e0) Stream added, broadcasting: 1
I0720 14:59:48.847027       7 log.go:172] (0xc0065004d0) Reply frame received for 1
I0720 14:59:48.847070       7 log.go:172] (0xc0065004d0) (0xc001720a00) Create stream
I0720 14:59:48.847086       7 log.go:172] (0xc0065004d0) (0xc001720a00) Stream added, broadcasting: 3
I0720 14:59:48.848216       7 log.go:172] (0xc0065004d0) Reply frame received for 3
I0720 14:59:48.848253       7 log.go:172] (0xc0065004d0) (0xc001720b40) Create stream
I0720 14:59:48.848263       7 log.go:172] (0xc0065004d0) (0xc001720b40) Stream added, broadcasting: 5
I0720 14:59:48.849527       7 log.go:172] (0xc0065004d0) Reply frame received for 5
I0720 14:59:49.912180       7 log.go:172] (0xc0065004d0) Data frame received for 3
I0720 14:59:49.912255       7 log.go:172] (0xc001720a00) (3) Data frame handling
I0720 14:59:49.912304       7 log.go:172] (0xc001720a00) (3) Data frame sent
I0720 14:59:49.912922       7 log.go:172] (0xc0065004d0) Data frame received for 5
I0720 14:59:49.912964       7 log.go:172] (0xc001720b40) (5) Data frame handling
I0720 14:59:49.913025       7 log.go:172] (0xc0065004d0) Data frame received for 3
I0720 14:59:49.913063       7 log.go:172] (0xc001720a00) (3) Data frame handling
I0720 14:59:49.915883       7 log.go:172] (0xc0065004d0) Data frame received for 1
I0720 14:59:49.915906       7 log.go:172] (0xc000e8e6e0) (1) Data frame handling
I0720 14:59:49.915920       7 log.go:172] (0xc000e8e6e0) (1) Data frame sent
I0720 14:59:49.915934       7 log.go:172] (0xc0065004d0) (0xc000e8e6e0) Stream removed, broadcasting: 1
I0720 14:59:49.916038       7 log.go:172] (0xc0065004d0) (0xc000e8e6e0) Stream removed, broadcasting: 1
I0720 14:59:49.916052       7 log.go:172] (0xc0065004d0) (0xc001720a00) Stream removed, broadcasting: 3
I0720 14:59:49.916207       7 log.go:172] (0xc0065004d0) (0xc001720b40) Stream removed, broadcasting: 5
Jul 20 14:59:49.916: INFO: Found all expected endpoints: [netserver-0]
Jul 20 14:59:49.919: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.233 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6667 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 14:59:49.919: INFO: >>> kubeConfig: /root/.kube/config
I0720 14:59:50.008157       7 log.go:172] (0xc0069222c0) (0xc0017210e0) Create stream
I0720 14:59:50.008188       7 log.go:172] (0xc0069222c0) (0xc0017210e0) Stream added, broadcasting: 1
I0720 14:59:50.010097       7 log.go:172] (0xc0069222c0) Reply frame received for 1
I0720 14:59:50.010131       7 log.go:172] (0xc0069222c0) (0xc001721180) Create stream
I0720 14:59:50.010142       7 log.go:172] (0xc0069222c0) (0xc001721180) Stream added, broadcasting: 3
I0720 14:59:50.010972       7 log.go:172] (0xc0069222c0) Reply frame received for 3
I0720 14:59:50.010998       7 log.go:172] (0xc0069222c0) (0xc000e8e960) Create stream
I0720 14:59:50.011006       7 log.go:172] (0xc0069222c0) (0xc000e8e960) Stream added, broadcasting: 5
I0720 14:59:50.011863       7 log.go:172] (0xc0069222c0) Reply frame received for 5
I0720 14:59:51.072173       7 log.go:172] (0xc0069222c0) Data frame received for 5
I0720 14:59:51.072216       7 log.go:172] (0xc000e8e960) (5) Data frame handling
I0720 14:59:51.072240       7 log.go:172] (0xc0069222c0) Data frame received for 3
I0720 14:59:51.072248       7 log.go:172] (0xc001721180) (3) Data frame handling
I0720 14:59:51.072258       7 log.go:172] (0xc001721180) (3) Data frame sent
I0720 14:59:51.072272       7 log.go:172] (0xc0069222c0) Data frame received for 3
I0720 14:59:51.072279       7 log.go:172] (0xc001721180) (3) Data frame handling
I0720 14:59:51.074340       7 log.go:172] (0xc0069222c0) Data frame received for 1
I0720 14:59:51.074359       7 log.go:172] (0xc0017210e0) (1) Data frame handling
I0720 14:59:51.074368       7 log.go:172] (0xc0017210e0) (1) Data frame sent
I0720 14:59:51.074384       7 log.go:172] (0xc0069222c0) (0xc0017210e0) Stream removed, broadcasting: 1
I0720 14:59:51.074407       7 log.go:172] (0xc0069222c0) Go away received
I0720 14:59:51.074543       7 log.go:172] (0xc0069222c0) (0xc0017210e0) Stream removed, broadcasting: 1
I0720 14:59:51.074582       7 log.go:172] (0xc0069222c0) (0xc001721180) Stream removed, broadcasting: 3
I0720 14:59:51.074600       7 log.go:172] (0xc0069222c0) (0xc000e8e960) Stream removed, broadcasting: 5
Jul 20 14:59:51.074: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:59:51.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6667" for this suite.

• [SLOW TEST:28.881 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3437,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:59:51.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-53008688-61f5-4ba0-a752-38daf2477ba3
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 14:59:51.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4694" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":203,"skipped":3474,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 14:59:51.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3524.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3524.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3524.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3524.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 14:59:59.415: INFO: DNS probes using dns-test-2ae2de4f-bfc3-4062-ad8b-3f80f6dcb0cd succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3524.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3524.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3524.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3524.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 15:00:07.561: INFO: DNS probes using dns-test-aa9060f1-1fc9-43b0-8039-312cafa6d64d succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3524.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3524.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3524.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3524.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 15:00:16.061: INFO: DNS probes using dns-test-ce9d8694-d7d4-4b08-8af7-a308619e85c3 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:00:16.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3524" for this suite.

• [SLOW TEST:25.620 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":204,"skipped":3476,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:00:16.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 20 15:00:17.008: INFO: Waiting up to 5m0s for pod "pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c" in namespace "emptydir-686" to be "Succeeded or Failed"
Jul 20 15:00:17.012: INFO: Pod "pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.889925ms
Jul 20 15:00:19.020: INFO: Pod "pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011791201s
Jul 20 15:00:21.024: INFO: Pod "pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c": Phase="Running", Reason="", readiness=true. Elapsed: 4.016429492s
Jul 20 15:00:23.052: INFO: Pod "pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043777574s
STEP: Saw pod success
Jul 20 15:00:23.052: INFO: Pod "pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c" satisfied condition "Succeeded or Failed"
Jul 20 15:00:23.056: INFO: Trying to get logs from node kali-worker pod pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c container test-container: 
STEP: delete the pod
Jul 20 15:00:23.422: INFO: Waiting for pod pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c to disappear
Jul 20 15:00:23.427: INFO: Pod pod-f48c9d1d-735f-4bfa-bd40-ee93fca5ce7c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:00:23.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-686" for this suite.

• [SLOW TEST:6.594 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3477,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:00:23.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 20 15:00:23.601: INFO: Waiting up to 5m0s for pod "pod-49de08c9-9874-4e5f-b43e-51844b5a6bde" in namespace "emptydir-3006" to be "Succeeded or Failed"
Jul 20 15:00:23.621: INFO: Pod "pod-49de08c9-9874-4e5f-b43e-51844b5a6bde": Phase="Pending", Reason="", readiness=false. Elapsed: 19.217758ms
Jul 20 15:00:25.693: INFO: Pod "pod-49de08c9-9874-4e5f-b43e-51844b5a6bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091293507s
Jul 20 15:00:27.696: INFO: Pod "pod-49de08c9-9874-4e5f-b43e-51844b5a6bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094875946s
STEP: Saw pod success
Jul 20 15:00:27.696: INFO: Pod "pod-49de08c9-9874-4e5f-b43e-51844b5a6bde" satisfied condition "Succeeded or Failed"
Jul 20 15:00:27.699: INFO: Trying to get logs from node kali-worker pod pod-49de08c9-9874-4e5f-b43e-51844b5a6bde container test-container: 
STEP: delete the pod
Jul 20 15:00:27.733: INFO: Waiting for pod pod-49de08c9-9874-4e5f-b43e-51844b5a6bde to disappear
Jul 20 15:00:27.763: INFO: Pod pod-49de08c9-9874-4e5f-b43e-51844b5a6bde no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:00:27.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3006" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3483,"failed":0}

------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:00:27.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 15:00:27.878: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/: 
alternatives.log
containers/
[node log listing repeated for each of the remaining proxied requests; the capture is truncated here, so the rest of this spec and the header of the following Security Context spec are missing]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 15:00:28.048: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-35f48ec5-2de4-4fdd-b39d-9dab3517be1f" in namespace "security-context-test-9048" to be "Succeeded or Failed"
Jul 20 15:00:28.057: INFO: Pod "busybox-privileged-false-35f48ec5-2de4-4fdd-b39d-9dab3517be1f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256128ms
Jul 20 15:00:30.070: INFO: Pod "busybox-privileged-false-35f48ec5-2de4-4fdd-b39d-9dab3517be1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021765306s
Jul 20 15:00:32.074: INFO: Pod "busybox-privileged-false-35f48ec5-2de4-4fdd-b39d-9dab3517be1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026078965s
Jul 20 15:00:32.075: INFO: Pod "busybox-privileged-false-35f48ec5-2de4-4fdd-b39d-9dab3517be1f" satisfied condition "Succeeded or Failed"
Jul 20 15:00:32.093: INFO: Got logs for pod "busybox-privileged-false-35f48ec5-2de4-4fdd-b39d-9dab3517be1f": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:00:32.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9048" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3490,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:00:32.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4504
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4504
STEP: Creating statefulset with conflicting port in namespace statefulset-4504
STEP: Waiting until pod test-pod will start running in namespace statefulset-4504
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4504
Jul 20 15:00:36.521: INFO: Observed stateful pod in namespace: statefulset-4504, name: ss-0, uid: 7a1dc730-7bc3-4bf1-98c3-42a799da056d, status phase: Pending. Waiting for statefulset controller to delete.
Jul 20 15:00:36.833: INFO: Observed stateful pod in namespace: statefulset-4504, name: ss-0, uid: 7a1dc730-7bc3-4bf1-98c3-42a799da056d, status phase: Failed. Waiting for statefulset controller to delete.
Jul 20 15:00:36.843: INFO: Observed stateful pod in namespace: statefulset-4504, name: ss-0, uid: 7a1dc730-7bc3-4bf1-98c3-42a799da056d, status phase: Failed. Waiting for statefulset controller to delete.
Jul 20 15:00:36.916: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4504
STEP: Removing pod with conflicting port in namespace statefulset-4504
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4504 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 20 15:00:43.083: INFO: Deleting all statefulset in ns statefulset-4504
Jul 20 15:00:43.086: INFO: Scaling statefulset ss to 0
Jul 20 15:01:03.151: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 15:01:03.153: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:01:03.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4504" for this suite.

• [SLOW TEST:31.073 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":209,"skipped":3503,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:01:03.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 20 15:01:11.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 20 15:01:11.319: INFO: Pod pod-with-prestop-http-hook still exists
Jul 20 15:01:13.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 20 15:01:13.323: INFO: Pod pod-with-prestop-http-hook still exists
Jul 20 15:01:15.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 20 15:01:15.325: INFO: Pod pod-with-prestop-http-hook still exists
Jul 20 15:01:17.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 20 15:01:17.325: INFO: Pod pod-with-prestop-http-hook still exists
Jul 20 15:01:19.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 20 15:01:19.325: INFO: Pod pod-with-prestop-http-hook still exists
Jul 20 15:01:21.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 20 15:01:21.325: INFO: Pod pod-with-prestop-http-hook still exists
Jul 20 15:01:23.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 20 15:01:23.325: INFO: Pod pod-with-prestop-http-hook still exists
Jul 20 15:01:25.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 20 15:01:25.324: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:01:25.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5009" for this suite.

• [SLOW TEST:22.184 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3534,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:01:25.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-64c5e239-538e-4333-8197-0e20e7d53ace in namespace container-probe-3079
Jul 20 15:01:29.446: INFO: Started pod busybox-64c5e239-538e-4333-8197-0e20e7d53ace in namespace container-probe-3079
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 15:01:29.450: INFO: Initial restart count of pod busybox-64c5e239-538e-4333-8197-0e20e7d53ace is 0
Jul 20 15:02:23.658: INFO: Restart count of pod container-probe-3079/busybox-64c5e239-538e-4333-8197-0e20e7d53ace is now 1 (54.208884145s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:02:23.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3079" for this suite.

• [SLOW TEST:58.582 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3551,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:02:23.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-419e6aba-26e6-4853-b706-47a319cc7fb2
STEP: Creating configMap with name cm-test-opt-upd-2f57f0da-a5af-4bc3-ab1a-2b3bea3508b2
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-419e6aba-26e6-4853-b706-47a319cc7fb2
STEP: Updating configmap cm-test-opt-upd-2f57f0da-a5af-4bc3-ab1a-2b3bea3508b2
STEP: Creating configMap with name cm-test-opt-create-9e6a2331-9102-415b-a61b-de23c8d805eb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:02:36.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5511" for this suite.

• [SLOW TEST:12.787 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3572,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:02:36.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 15:02:37.705: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 15:02:39.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854157, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854157, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854157, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854157, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 15:02:41.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854157, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854157, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854157, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854157, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 15:02:44.832: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 15:02:44.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6327-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:02:46.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7584" for this suite.
STEP: Destroying namespace "webhook-7584-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.432 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":213,"skipped":3584,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:02:46.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul 20 15:02:50.830: INFO: Successfully updated pod "labelsupdate9a74612c-cfa5-4639-b5b9-003d3b20b295"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:02:54.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7438" for this suite.

• [SLOW TEST:8.745 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3585,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:02:54.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-68368857-2ef4-4b21-904b-13c2f2c94fdb
STEP: Creating a pod to test consume secrets
Jul 20 15:02:54.964: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fa887c94-ace1-47b9-a72f-58ef72884aaa" in namespace "projected-7009" to be "Succeeded or Failed"
Jul 20 15:02:54.968: INFO: Pod "pod-projected-secrets-fa887c94-ace1-47b9-a72f-58ef72884aaa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258236ms
Jul 20 15:02:56.986: INFO: Pod "pod-projected-secrets-fa887c94-ace1-47b9-a72f-58ef72884aaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021445716s
Jul 20 15:02:58.989: INFO: Pod "pod-projected-secrets-fa887c94-ace1-47b9-a72f-58ef72884aaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025389984s
STEP: Saw pod success
Jul 20 15:02:58.990: INFO: Pod "pod-projected-secrets-fa887c94-ace1-47b9-a72f-58ef72884aaa" satisfied condition "Succeeded or Failed"
Jul 20 15:02:58.992: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-fa887c94-ace1-47b9-a72f-58ef72884aaa container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 15:02:59.019: INFO: Waiting for pod pod-projected-secrets-fa887c94-ace1-47b9-a72f-58ef72884aaa to disappear
Jul 20 15:02:59.041: INFO: Pod pod-projected-secrets-fa887c94-ace1-47b9-a72f-58ef72884aaa no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:02:59.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7009" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3590,"failed":0}
SS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:02:59.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:02:59.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9298" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":216,"skipped":3592,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:02:59.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-7c689676-4e57-4433-8d7c-7acb6d068100
STEP: Creating a pod to test consume configMaps
Jul 20 15:02:59.228: INFO: Waiting up to 5m0s for pod "pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc" in namespace "configmap-5788" to be "Succeeded or Failed"
Jul 20 15:02:59.246: INFO: Pod "pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.319709ms
Jul 20 15:03:01.250: INFO: Pod "pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022220583s
Jul 20 15:03:03.254: INFO: Pod "pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc": Phase="Running", Reason="", readiness=true. Elapsed: 4.026276493s
Jul 20 15:03:05.275: INFO: Pod "pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047084387s
STEP: Saw pod success
Jul 20 15:03:05.275: INFO: Pod "pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc" satisfied condition "Succeeded or Failed"
Jul 20 15:03:05.298: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc container configmap-volume-test: 
STEP: delete the pod
Jul 20 15:03:05.341: INFO: Waiting for pod pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc to disappear
Jul 20 15:03:05.361: INFO: Pod pod-configmaps-639a01e1-9342-464c-8056-06548250f6dc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:03:05.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5788" for this suite.

• [SLOW TEST:6.237 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3602,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:03:05.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-c62f0609-0ec6-47e6-8fc7-ba42d404b5c9
STEP: Creating a pod to test consume configMaps
Jul 20 15:03:05.478: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d" in namespace "projected-3615" to be "Succeeded or Failed"
Jul 20 15:03:05.500: INFO: Pod "pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.278547ms
Jul 20 15:03:07.504: INFO: Pod "pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025636832s
Jul 20 15:03:09.507: INFO: Pod "pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029175408s
Jul 20 15:03:11.511: INFO: Pod "pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032934132s
STEP: Saw pod success
Jul 20 15:03:11.511: INFO: Pod "pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d" satisfied condition "Succeeded or Failed"
Jul 20 15:03:11.515: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 15:03:11.666: INFO: Waiting for pod pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d to disappear
Jul 20 15:03:11.700: INFO: Pod pod-projected-configmaps-19b38dd1-fa13-4d89-9960-ec89972d636d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:03:11.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3615" for this suite.

• [SLOW TEST:6.369 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3614,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:03:11.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-765.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-765.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-765.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-765.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-765.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-765.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 15:03:19.920: INFO: DNS probes using dns-765/dns-test-fb7daee9-3892-4f1a-bf63-0c8cc5b7324c succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:03:19.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-765" for this suite.

• [SLOW TEST:8.302 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":219,"skipped":3634,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:03:20.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:03:51.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2407" for this suite.
STEP: Destroying namespace "nsdeletetest-9743" for this suite.
Jul 20 15:03:51.698: INFO: Namespace nsdeletetest-9743 was already deleted
STEP: Destroying namespace "nsdeletetest-2039" for this suite.

• [SLOW TEST:31.660 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":220,"skipped":3638,"failed":0}
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:03:51.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-8925ca47-c916-4671-abdb-5922f2982a6b
STEP: Creating secret with name s-test-opt-upd-3cc948c6-2fe2-4081-9af2-1e5b46aa68da
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8925ca47-c916-4671-abdb-5922f2982a6b
STEP: Updating secret s-test-opt-upd-3cc948c6-2fe2-4081-9af2-1e5b46aa68da
STEP: Creating secret with name s-test-opt-create-3e50bf8b-075f-4e10-a6b2-28437b02667b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:04:02.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3517" for this suite.

• [SLOW TEST:10.533 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3638,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:04:02.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Jul 20 15:04:02.517: INFO: Waiting up to 5m0s for pod "client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e" in namespace "containers-9460" to be "Succeeded or Failed"
Jul 20 15:04:02.604: INFO: Pod "client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 86.508474ms
Jul 20 15:04:04.607: INFO: Pod "client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089999196s
Jul 20 15:04:06.612: INFO: Pod "client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e": Phase="Running", Reason="", readiness=true. Elapsed: 4.094493434s
Jul 20 15:04:08.615: INFO: Pod "client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098110414s
STEP: Saw pod success
Jul 20 15:04:08.615: INFO: Pod "client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e" satisfied condition "Succeeded or Failed"
Jul 20 15:04:08.618: INFO: Trying to get logs from node kali-worker2 pod client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e container test-container: 
STEP: delete the pod
Jul 20 15:04:08.946: INFO: Waiting for pod client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e to disappear
Jul 20 15:04:09.040: INFO: Pod client-containers-91e53a61-c07a-43cb-a7af-604e96017e7e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:04:09.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9460" for this suite.

• [SLOW TEST:6.937 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3668,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:04:09.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-60da5ac3-55b5-4cd7-822a-a2d2b88f1937
STEP: Creating a pod to test consume secrets
Jul 20 15:04:09.817: INFO: Waiting up to 5m0s for pod "pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422" in namespace "secrets-8426" to be "Succeeded or Failed"
Jul 20 15:04:09.867: INFO: Pod "pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422": Phase="Pending", Reason="", readiness=false. Elapsed: 50.34717ms
Jul 20 15:04:11.879: INFO: Pod "pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061913504s
Jul 20 15:04:13.882: INFO: Pod "pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064659519s
Jul 20 15:04:15.885: INFO: Pod "pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06798444s
STEP: Saw pod success
Jul 20 15:04:15.885: INFO: Pod "pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422" satisfied condition "Succeeded or Failed"
Jul 20 15:04:15.887: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422 container secret-volume-test: 
STEP: delete the pod
Jul 20 15:04:15.906: INFO: Waiting for pod pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422 to disappear
Jul 20 15:04:15.923: INFO: Pod pod-secrets-3fd55554-f84c-4265-8524-11f11e5ff422 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:04:15.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8426" for this suite.

• [SLOW TEST:6.759 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3692,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:04:15.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:04:27.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5171" for this suite.

• [SLOW TEST:11.258 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":224,"skipped":3696,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:04:27.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 15:04:27.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 20 15:04:29.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5061 create -f -'
Jul 20 15:04:32.605: INFO: stderr: ""
Jul 20 15:04:32.605: INFO: stdout: "e2e-test-crd-publish-openapi-6342-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul 20 15:04:32.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5061 delete e2e-test-crd-publish-openapi-6342-crds test-cr'
Jul 20 15:04:32.697: INFO: stderr: ""
Jul 20 15:04:32.697: INFO: stdout: "e2e-test-crd-publish-openapi-6342-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jul 20 15:04:32.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5061 apply -f -'
Jul 20 15:04:32.955: INFO: stderr: ""
Jul 20 15:04:32.955: INFO: stdout: "e2e-test-crd-publish-openapi-6342-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul 20 15:04:32.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5061 delete e2e-test-crd-publish-openapi-6342-crds test-cr'
Jul 20 15:04:33.087: INFO: stderr: ""
Jul 20 15:04:33.087: INFO: stdout: "e2e-test-crd-publish-openapi-6342-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 20 15:04:33.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6342-crds'
Jul 20 15:04:33.291: INFO: stderr: ""
Jul 20 15:04:33.291: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6342-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:04:36.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5061" for this suite.

• [SLOW TEST:9.018 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":225,"skipped":3697,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:04:36.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-26b74d4a-691d-430b-b6ea-fa5c30374d69 in namespace container-probe-696
Jul 20 15:04:42.284: INFO: Started pod busybox-26b74d4a-691d-430b-b6ea-fa5c30374d69 in namespace container-probe-696
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 15:04:42.287: INFO: Initial restart count of pod busybox-26b74d4a-691d-430b-b6ea-fa5c30374d69 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:08:43.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-696" for this suite.

• [SLOW TEST:247.613 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3701,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:08:43.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Jul 20 15:08:43.933: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jul 20 15:08:43.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4152'
Jul 20 15:08:44.381: INFO: stderr: ""
Jul 20 15:08:44.381: INFO: stdout: "service/agnhost-slave created\n"
Jul 20 15:08:44.381: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jul 20 15:08:44.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4152'
Jul 20 15:08:44.675: INFO: stderr: ""
Jul 20 15:08:44.675: INFO: stdout: "service/agnhost-master created\n"
Jul 20 15:08:44.675: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul 20 15:08:44.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4152'
Jul 20 15:08:45.009: INFO: stderr: ""
Jul 20 15:08:45.009: INFO: stdout: "service/frontend created\n"
Jul 20 15:08:45.010: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jul 20 15:08:45.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4152'
Jul 20 15:08:45.274: INFO: stderr: ""
Jul 20 15:08:45.274: INFO: stdout: "deployment.apps/frontend created\n"
Jul 20 15:08:45.274: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 20 15:08:45.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4152'
Jul 20 15:08:45.569: INFO: stderr: ""
Jul 20 15:08:45.569: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jul 20 15:08:45.569: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 20 15:08:45.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4152'
Jul 20 15:08:45.868: INFO: stderr: ""
Jul 20 15:08:45.868: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jul 20 15:08:45.868: INFO: Waiting for all frontend pods to be Running.
Jul 20 15:08:55.918: INFO: Waiting for frontend to serve content.
Jul 20 15:08:55.997: INFO: Trying to add a new entry to the guestbook.
Jul 20 15:08:56.008: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul 20 15:08:56.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4152'
Jul 20 15:08:56.331: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 15:08:56.331: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 15:08:56.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4152'
Jul 20 15:08:56.488: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 15:08:56.488: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 15:08:56.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4152'
Jul 20 15:08:57.294: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 15:08:57.294: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 15:08:57.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4152'
Jul 20 15:08:57.611: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 15:08:57.611: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 15:08:57.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4152'
Jul 20 15:08:58.079: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 15:08:58.079: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 15:08:58.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4152'
Jul 20 15:08:58.310: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 15:08:58.310: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:08:58.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4152" for this suite.

• [SLOW TEST:14.864 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":227,"skipped":3715,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:08:58.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-a21207eb-2088-497b-8db8-ee7cb8196cf0
STEP: Creating a pod to test consume secrets
Jul 20 15:08:59.310: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89" in namespace "projected-5420" to be "Succeeded or Failed"
Jul 20 15:08:59.345: INFO: Pod "pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89": Phase="Pending", Reason="", readiness=false. Elapsed: 35.076649ms
Jul 20 15:09:01.395: INFO: Pod "pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085805057s
Jul 20 15:09:03.400: INFO: Pod "pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0902221s
Jul 20 15:09:05.405: INFO: Pod "pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094969684s
STEP: Saw pod success
Jul 20 15:09:05.405: INFO: Pod "pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89" satisfied condition "Succeeded or Failed"
Jul 20 15:09:05.408: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89 container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 15:09:05.496: INFO: Waiting for pod pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89 to disappear
Jul 20 15:09:05.503: INFO: Pod pod-projected-secrets-704c47ed-ea2f-4ca8-88d1-6f8fa9938d89 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:09:05.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5420" for this suite.

• [SLOW TEST:6.848 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3781,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:09:05.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-9d3cecf4-c865-4c2e-9b5a-1656249b26f1
STEP: Creating a pod to test consume secrets
Jul 20 15:09:05.600: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-48a1426e-3478-4c49-aa8d-584678e1659a" in namespace "projected-7127" to be "Succeeded or Failed"
Jul 20 15:09:05.605: INFO: Pod "pod-projected-secrets-48a1426e-3478-4c49-aa8d-584678e1659a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.879558ms
Jul 20 15:09:07.610: INFO: Pod "pod-projected-secrets-48a1426e-3478-4c49-aa8d-584678e1659a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009216342s
Jul 20 15:09:09.614: INFO: Pod "pod-projected-secrets-48a1426e-3478-4c49-aa8d-584678e1659a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013479463s
STEP: Saw pod success
Jul 20 15:09:09.614: INFO: Pod "pod-projected-secrets-48a1426e-3478-4c49-aa8d-584678e1659a" satisfied condition "Succeeded or Failed"
Jul 20 15:09:09.617: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-48a1426e-3478-4c49-aa8d-584678e1659a container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 15:09:09.647: INFO: Waiting for pod pod-projected-secrets-48a1426e-3478-4c49-aa8d-584678e1659a to disappear
Jul 20 15:09:09.653: INFO: Pod pod-projected-secrets-48a1426e-3478-4c49-aa8d-584678e1659a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:09:09.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7127" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:09:09.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 15:09:10.362: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 15:09:12.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854550, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854550, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854550, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854550, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 15:09:14.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854550, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854550, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854550, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854550, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 15:09:17.829: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:09:30.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9949" for this suite.
STEP: Destroying namespace "webhook-9949-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.001 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":230,"skipped":3861,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:09:30.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:09:35.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8012" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":231,"skipped":3863,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:09:35.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 15:09:36.723: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 15:09:38.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854576, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854576, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854576, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854576, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 15:09:40.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854576, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854576, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854576, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854576, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 15:09:43.997: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:09:44.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8421" for this suite.
STEP: Destroying namespace "webhook-8421-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.194 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":232,"skipped":3893,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:09:44.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:09:50.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9423" for this suite.

• [SLOW TEST:5.663 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":233,"skipped":3901,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:09:50.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-d4v4
STEP: Creating a pod to test atomic-volume-subpath
Jul 20 15:09:50.365: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d4v4" in namespace "subpath-8783" to be "Succeeded or Failed"
Jul 20 15:09:50.386: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.452637ms
Jul 20 15:09:52.545: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179178289s
Jul 20 15:09:54.549: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 4.183176399s
Jul 20 15:09:56.553: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 6.187237413s
Jul 20 15:09:58.557: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 8.191477671s
Jul 20 15:10:00.561: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 10.19604515s
Jul 20 15:10:02.566: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 12.200456703s
Jul 20 15:10:04.571: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 14.205182751s
Jul 20 15:10:06.575: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 16.209534088s
Jul 20 15:10:08.580: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 18.214167308s
Jul 20 15:10:10.601: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 20.23588589s
Jul 20 15:10:12.637: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Running", Reason="", readiness=true. Elapsed: 22.271907744s
Jul 20 15:10:14.641: INFO: Pod "pod-subpath-test-configmap-d4v4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.275667645s
STEP: Saw pod success
Jul 20 15:10:14.641: INFO: Pod "pod-subpath-test-configmap-d4v4" satisfied condition "Succeeded or Failed"
Jul 20 15:10:14.643: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-d4v4 container test-container-subpath-configmap-d4v4: 
STEP: delete the pod
Jul 20 15:10:15.093: INFO: Waiting for pod pod-subpath-test-configmap-d4v4 to disappear
Jul 20 15:10:15.095: INFO: Pod pod-subpath-test-configmap-d4v4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-d4v4
Jul 20 15:10:15.095: INFO: Deleting pod "pod-subpath-test-configmap-d4v4" in namespace "subpath-8783"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:10:15.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8783" for this suite.

• [SLOW TEST:24.979 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":234,"skipped":3905,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:10:15.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 20 15:10:23.647: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 15:10:23.663: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 15:10:25.663: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 15:10:25.758: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 15:10:27.663: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 15:10:27.668: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 15:10:29.663: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 15:10:29.667: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:10:29.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5210" for this suite.

• [SLOW TEST:14.456 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":3923,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:10:29.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9004.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9004.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9004.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9004.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9004.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9004.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 15:10:35.907: INFO: DNS probes using dns-9004/dns-test-435fa6ad-9b71-4bff-857b-3742dfd17063 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:10:36.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9004" for this suite.

• [SLOW TEST:6.894 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":236,"skipped":3929,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:10:36.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 20 15:10:36.769: INFO: Waiting up to 5m0s for pod "downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36" in namespace "downward-api-9834" to be "Succeeded or Failed"
Jul 20 15:10:36.777: INFO: Pod "downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36": Phase="Pending", Reason="", readiness=false. Elapsed: 7.960701ms
Jul 20 15:10:38.871: INFO: Pod "downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10212676s
Jul 20 15:10:40.876: INFO: Pod "downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106785804s
Jul 20 15:10:42.880: INFO: Pod "downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111312519s
STEP: Saw pod success
Jul 20 15:10:42.880: INFO: Pod "downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36" satisfied condition "Succeeded or Failed"
Jul 20 15:10:42.884: INFO: Trying to get logs from node kali-worker pod downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36 container dapi-container: 
STEP: delete the pod
Jul 20 15:10:42.904: INFO: Waiting for pod downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36 to disappear
Jul 20 15:10:42.908: INFO: Pod downward-api-0d65d3a1-6a00-42c1-9c36-c49fca7a3f36 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:10:42.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9834" for this suite.

• [SLOW TEST:6.345 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":3932,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:10:42.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Jul 20 15:10:43.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-8211 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jul 20 15:10:43.141: INFO: stderr: ""
Jul 20 15:10:43.141: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Jul 20 15:10:43.141: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jul 20 15:10:43.141: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8211" to be "running and ready, or succeeded"
Jul 20 15:10:43.154: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 13.481425ms
Jul 20 15:10:45.159: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017713545s
Jul 20 15:10:47.162: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.02121369s
Jul 20 15:10:47.162: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jul 20 15:10:47.162: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jul 20 15:10:47.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211'
Jul 20 15:10:47.279: INFO: stderr: ""
Jul 20 15:10:47.279: INFO: stdout: "I0720 15:10:45.515916       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/bw9q 546\nI0720 15:10:45.716074       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/jrbd 255\nI0720 15:10:45.916056       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/527 393\nI0720 15:10:46.116148       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/b2f 469\nI0720 15:10:46.316114       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/tccq 223\nI0720 15:10:46.516143       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/sbc 557\nI0720 15:10:46.716089       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/rqqz 550\nI0720 15:10:46.916118       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6z8d 278\nI0720 15:10:47.116166       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/tx6t 451\n"
STEP: limiting log lines
Jul 20 15:10:47.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --tail=1'
Jul 20 15:10:47.404: INFO: stderr: ""
Jul 20 15:10:47.404: INFO: stdout: "I0720 15:10:47.316060       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/284h 548\n"
Jul 20 15:10:47.404: INFO: got output "I0720 15:10:47.316060       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/284h 548\n"
STEP: limiting log bytes
Jul 20 15:10:47.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --limit-bytes=1'
Jul 20 15:10:47.560: INFO: stderr: ""
Jul 20 15:10:47.560: INFO: stdout: "I"
Jul 20 15:10:47.560: INFO: got output "I"
STEP: exposing timestamps
Jul 20 15:10:47.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --tail=1 --timestamps'
Jul 20 15:10:47.662: INFO: stderr: ""
Jul 20 15:10:47.662: INFO: stdout: "2020-07-20T15:10:47.516196871Z I0720 15:10:47.516043       1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/bg5n 550\n"
Jul 20 15:10:47.662: INFO: got output "2020-07-20T15:10:47.516196871Z I0720 15:10:47.516043       1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/bg5n 550\n"
STEP: restricting to a time range
Jul 20 15:10:50.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --since=1s'
Jul 20 15:10:50.279: INFO: stderr: ""
Jul 20 15:10:50.279: INFO: stdout: "I0720 15:10:49.316111       1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/hs8 543\nI0720 15:10:49.516162       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/cvr 384\nI0720 15:10:49.716102       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/4bdz 541\nI0720 15:10:49.916123       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/29n 500\nI0720 15:10:50.116147       1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/mc4 351\n"
Jul 20 15:10:50.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8211 --since=24h'
Jul 20 15:10:50.390: INFO: stderr: ""
Jul 20 15:10:50.391: INFO: stdout: "I0720 15:10:45.515916       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/bw9q 546\nI0720 15:10:45.716074       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/jrbd 255\nI0720 15:10:45.916056       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/527 393\nI0720 15:10:46.116148       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/b2f 469\nI0720 15:10:46.316114       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/tccq 223\nI0720 15:10:46.516143       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/sbc 557\nI0720 15:10:46.716089       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/rqqz 550\nI0720 15:10:46.916118       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6z8d 278\nI0720 15:10:47.116166       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/tx6t 451\nI0720 15:10:47.316060       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/284h 548\nI0720 15:10:47.516043       1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/bg5n 550\nI0720 15:10:47.716126       1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/vn7 235\nI0720 15:10:47.916095       1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/wwj2 436\nI0720 15:10:48.116071       1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/zfjs 428\nI0720 15:10:48.316133       1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/dn2 357\nI0720 15:10:48.516071       1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/dmg 368\nI0720 15:10:48.716093       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/9ntg 453\nI0720 15:10:48.916100       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/tkr 598\nI0720 15:10:49.116076       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/hf2 539\nI0720 15:10:49.316111       1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/hs8 543\nI0720 15:10:49.516162       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/cvr 384\nI0720 15:10:49.716102       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/4bdz 541\nI0720 15:10:49.916123       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/29n 500\nI0720 15:10:50.116147       1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/mc4 351\nI0720 15:10:50.316061       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/bh55 206\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Jul 20 15:10:50.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8211'
Jul 20 15:11:03.300: INFO: stderr: ""
Jul 20 15:11:03.301: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:11:03.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8211" for this suite.

• [SLOW TEST:20.426 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":238,"skipped":3940,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:11:03.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 15:11:03.402: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul 20 15:11:03.436: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul 20 15:11:08.500: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 20 15:11:08.500: INFO: Creating deployment "test-rolling-update-deployment"
Jul 20 15:11:08.504: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul 20 15:11:08.520: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul 20 15:11:10.528: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jul 20 15:11:10.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854668, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854668, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854668, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854668, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 15:11:12.692: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 20 15:11:12.721: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-4978 /apis/apps/v1/namespaces/deployment-4978/deployments/test-rolling-update-deployment fd33c008-e27b-4fbd-861c-4de811372e8c 2750219 1 2020-07-20 15:11:08 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-07-20 15:11:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-20 15:11:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002db9a78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-20 15:11:08 +0000 UTC,LastTransitionTime:2020-07-20 15:11:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-07-20 15:11:12 +0000 UTC,LastTransitionTime:2020-07-20 15:11:08 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul 20 15:11:12.725: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-4978 /apis/apps/v1/namespaces/deployment-4978/replicasets/test-rolling-update-deployment-59d5cb45c7 3d57fdc7-aeec-4002-a12d-8c8028fd5fde 2750208 1 2020-07-20 15:11:08 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment fd33c008-e27b-4fbd-861c-4de811372e8c 0xc0044fa027 0xc0044fa028}] []  [{kube-controller-manager Update apps/v1 2020-07-20 15:11:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 100 51 51 99 48 48 56 45 101 50 55 98 45 52 102 98 100 45 56 54 49 99 45 52 100 101 56 49 49 51 55 50 101 56 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 
101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044fa0b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 20 15:11:12.725: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul 20 15:11:12.725: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-4978 /apis/apps/v1/namespaces/deployment-4978/replicasets/test-rolling-update-controller 9f5d3ab1-969a-4bfd-ac00-863f4a7c493a 2750218 2 2020-07-20 15:11:03 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment fd33c008-e27b-4fbd-861c-4de811372e8c 0xc0062bdf17 0xc0062bdf18}] []  [{e2e.test Update apps/v1 2020-07-20 15:11:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-20 15:11:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 100 51 51 99 48 48 56 45 101 50 55 98 45 52 102 98 100 45 56 54 49 99 45 52 100 101 56 49 49 51 55 50 101 56 99 
92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0062bdfb8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 20 15:11:12.729: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-dzcdj" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-dzcdj test-rolling-update-deployment-59d5cb45c7- deployment-4978 /api/v1/namespaces/deployment-4978/pods/test-rolling-update-deployment-59d5cb45c7-dzcdj 75573885-0cd8-4dbc-b159-bb9d8bd236a6 2750207 0 2020-07-20 15:11:08 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 3d57fdc7-aeec-4002-a12d-8c8028fd5fde 0xc0044fa587 0xc0044fa588}] []  [{kube-controller-manager Update v1 2020-07-20 15:11:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 100 53 55 102 100 99 55 45 97 101 101 99 45 52 48 48 50 45 97 49 50 100 45 56 99 56 48 50 56 102 100 53 102 100 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 15:11:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 
101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 54 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c7ntl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c7ntl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c7ntl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io
/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 15:11:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 15:11:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 15:11:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 15:11:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.168,StartTime:2020-07-20 15:11:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 15:11:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://c56f79eb064d1e093e8231d9ea57cf0f5816974a574de2e448f503b290db08b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.168,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:11:12.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4978" for this suite.

• [SLOW TEST:9.394 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":239,"skipped":3971,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:11:12.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jul 20 15:11:12.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jul 20 15:11:24.502: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 15:11:26.444: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:11:38.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3683" for this suite.

• [SLOW TEST:25.316 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":240,"skipped":3988,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:11:38.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 20 15:11:38.141: INFO: Waiting up to 5m0s for pod "pod-289b0e92-e7c5-42ba-b9a6-bff892449813" in namespace "emptydir-3231" to be "Succeeded or Failed"
Jul 20 15:11:38.150: INFO: Pod "pod-289b0e92-e7c5-42ba-b9a6-bff892449813": Phase="Pending", Reason="", readiness=false. Elapsed: 8.886912ms
Jul 20 15:11:40.154: INFO: Pod "pod-289b0e92-e7c5-42ba-b9a6-bff892449813": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013363862s
Jul 20 15:11:42.158: INFO: Pod "pod-289b0e92-e7c5-42ba-b9a6-bff892449813": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017085078s
STEP: Saw pod success
Jul 20 15:11:42.158: INFO: Pod "pod-289b0e92-e7c5-42ba-b9a6-bff892449813" satisfied condition "Succeeded or Failed"
Jul 20 15:11:42.161: INFO: Trying to get logs from node kali-worker pod pod-289b0e92-e7c5-42ba-b9a6-bff892449813 container test-container: 
STEP: delete the pod
Jul 20 15:11:42.185: INFO: Waiting for pod pod-289b0e92-e7c5-42ba-b9a6-bff892449813 to disappear
Jul 20 15:11:42.224: INFO: Pod pod-289b0e92-e7c5-42ba-b9a6-bff892449813 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:11:42.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3231" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":3989,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:11:42.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 20 15:11:52.451: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:52.451: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:52.490539       7 log.go:172] (0xc002d842c0) (0xc002016000) Create stream
I0720 15:11:52.490575       7 log.go:172] (0xc002d842c0) (0xc002016000) Stream added, broadcasting: 1
I0720 15:11:52.492938       7 log.go:172] (0xc002d842c0) Reply frame received for 1
I0720 15:11:52.492975       7 log.go:172] (0xc002d842c0) (0xc00157d860) Create stream
I0720 15:11:52.492992       7 log.go:172] (0xc002d842c0) (0xc00157d860) Stream added, broadcasting: 3
I0720 15:11:52.494023       7 log.go:172] (0xc002d842c0) Reply frame received for 3
I0720 15:11:52.494072       7 log.go:172] (0xc002d842c0) (0xc00157d900) Create stream
I0720 15:11:52.494083       7 log.go:172] (0xc002d842c0) (0xc00157d900) Stream added, broadcasting: 5
I0720 15:11:52.494924       7 log.go:172] (0xc002d842c0) Reply frame received for 5
I0720 15:11:52.543304       7 log.go:172] (0xc002d842c0) Data frame received for 3
I0720 15:11:52.543324       7 log.go:172] (0xc00157d860) (3) Data frame handling
I0720 15:11:52.543332       7 log.go:172] (0xc00157d860) (3) Data frame sent
I0720 15:11:52.543337       7 log.go:172] (0xc002d842c0) Data frame received for 3
I0720 15:11:52.543341       7 log.go:172] (0xc00157d860) (3) Data frame handling
I0720 15:11:52.543361       7 log.go:172] (0xc002d842c0) Data frame received for 5
I0720 15:11:52.543420       7 log.go:172] (0xc00157d900) (5) Data frame handling
I0720 15:11:52.544836       7 log.go:172] (0xc002d842c0) Data frame received for 1
I0720 15:11:52.544855       7 log.go:172] (0xc002016000) (1) Data frame handling
I0720 15:11:52.544865       7 log.go:172] (0xc002016000) (1) Data frame sent
I0720 15:11:52.544876       7 log.go:172] (0xc002d842c0) (0xc002016000) Stream removed, broadcasting: 1
I0720 15:11:52.544912       7 log.go:172] (0xc002d842c0) Go away received
I0720 15:11:52.544947       7 log.go:172] (0xc002d842c0) (0xc002016000) Stream removed, broadcasting: 1
I0720 15:11:52.544980       7 log.go:172] (0xc002d842c0) (0xc00157d860) Stream removed, broadcasting: 3
I0720 15:11:52.545005       7 log.go:172] (0xc002d842c0) (0xc00157d900) Stream removed, broadcasting: 5
Jul 20 15:11:52.545: INFO: Exec stderr: ""
Jul 20 15:11:52.545: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:52.545: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:52.573436       7 log.go:172] (0xc000d322c0) (0xc000cee320) Create stream
I0720 15:11:52.573467       7 log.go:172] (0xc000d322c0) (0xc000cee320) Stream added, broadcasting: 1
I0720 15:11:52.577330       7 log.go:172] (0xc000d322c0) Reply frame received for 1
I0720 15:11:52.577386       7 log.go:172] (0xc000d322c0) (0xc001f4f0e0) Create stream
I0720 15:11:52.577403       7 log.go:172] (0xc000d322c0) (0xc001f4f0e0) Stream added, broadcasting: 3
I0720 15:11:52.578841       7 log.go:172] (0xc000d322c0) Reply frame received for 3
I0720 15:11:52.578894       7 log.go:172] (0xc000d322c0) (0xc000cee640) Create stream
I0720 15:11:52.578920       7 log.go:172] (0xc000d322c0) (0xc000cee640) Stream added, broadcasting: 5
I0720 15:11:52.580153       7 log.go:172] (0xc000d322c0) Reply frame received for 5
I0720 15:11:52.633238       7 log.go:172] (0xc000d322c0) Data frame received for 5
I0720 15:11:52.633260       7 log.go:172] (0xc000cee640) (5) Data frame handling
I0720 15:11:52.633283       7 log.go:172] (0xc000d322c0) Data frame received for 3
I0720 15:11:52.633288       7 log.go:172] (0xc001f4f0e0) (3) Data frame handling
I0720 15:11:52.633295       7 log.go:172] (0xc001f4f0e0) (3) Data frame sent
I0720 15:11:52.633306       7 log.go:172] (0xc000d322c0) Data frame received for 3
I0720 15:11:52.633310       7 log.go:172] (0xc001f4f0e0) (3) Data frame handling
I0720 15:11:52.634948       7 log.go:172] (0xc000d322c0) Data frame received for 1
I0720 15:11:52.634968       7 log.go:172] (0xc000cee320) (1) Data frame handling
I0720 15:11:52.634980       7 log.go:172] (0xc000cee320) (1) Data frame sent
I0720 15:11:52.635124       7 log.go:172] (0xc000d322c0) (0xc000cee320) Stream removed, broadcasting: 1
I0720 15:11:52.635149       7 log.go:172] (0xc000d322c0) Go away received
I0720 15:11:52.635229       7 log.go:172] (0xc000d322c0) (0xc000cee320) Stream removed, broadcasting: 1
I0720 15:11:52.635267       7 log.go:172] (0xc000d322c0) (0xc001f4f0e0) Stream removed, broadcasting: 3
I0720 15:11:52.635292       7 log.go:172] (0xc000d322c0) (0xc000cee640) Stream removed, broadcasting: 5
Jul 20 15:11:52.635: INFO: Exec stderr: ""
Jul 20 15:11:52.635: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:52.635: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:52.663710       7 log.go:172] (0xc0025c6000) (0xc00157de00) Create stream
I0720 15:11:52.663740       7 log.go:172] (0xc0025c6000) (0xc00157de00) Stream added, broadcasting: 1
I0720 15:11:52.665585       7 log.go:172] (0xc0025c6000) Reply frame received for 1
I0720 15:11:52.665616       7 log.go:172] (0xc0025c6000) (0xc0020163c0) Create stream
I0720 15:11:52.665628       7 log.go:172] (0xc0025c6000) (0xc0020163c0) Stream added, broadcasting: 3
I0720 15:11:52.666347       7 log.go:172] (0xc0025c6000) Reply frame received for 3
I0720 15:11:52.666369       7 log.go:172] (0xc0025c6000) (0xc00157dea0) Create stream
I0720 15:11:52.666381       7 log.go:172] (0xc0025c6000) (0xc00157dea0) Stream added, broadcasting: 5
I0720 15:11:52.667294       7 log.go:172] (0xc0025c6000) Reply frame received for 5
I0720 15:11:52.728463       7 log.go:172] (0xc0025c6000) Data frame received for 5
I0720 15:11:52.728501       7 log.go:172] (0xc0025c6000) Data frame received for 3
I0720 15:11:52.728522       7 log.go:172] (0xc0020163c0) (3) Data frame handling
I0720 15:11:52.728539       7 log.go:172] (0xc0020163c0) (3) Data frame sent
I0720 15:11:52.728544       7 log.go:172] (0xc0025c6000) Data frame received for 3
I0720 15:11:52.728562       7 log.go:172] (0xc00157dea0) (5) Data frame handling
I0720 15:11:52.728641       7 log.go:172] (0xc0020163c0) (3) Data frame handling
I0720 15:11:52.730106       7 log.go:172] (0xc0025c6000) Data frame received for 1
I0720 15:11:52.730147       7 log.go:172] (0xc00157de00) (1) Data frame handling
I0720 15:11:52.730180       7 log.go:172] (0xc00157de00) (1) Data frame sent
I0720 15:11:52.730194       7 log.go:172] (0xc0025c6000) (0xc00157de00) Stream removed, broadcasting: 1
I0720 15:11:52.730241       7 log.go:172] (0xc0025c6000) Go away received
I0720 15:11:52.730300       7 log.go:172] (0xc0025c6000) (0xc00157de00) Stream removed, broadcasting: 1
I0720 15:11:52.730322       7 log.go:172] (0xc0025c6000) (0xc0020163c0) Stream removed, broadcasting: 3
I0720 15:11:52.730334       7 log.go:172] (0xc0025c6000) (0xc00157dea0) Stream removed, broadcasting: 5
Jul 20 15:11:52.730: INFO: Exec stderr: ""
Jul 20 15:11:52.730: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:52.730: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:52.767569       7 log.go:172] (0xc0068b82c0) (0xc001f4f720) Create stream
I0720 15:11:52.767593       7 log.go:172] (0xc0068b82c0) (0xc001f4f720) Stream added, broadcasting: 1
I0720 15:11:52.770066       7 log.go:172] (0xc0068b82c0) Reply frame received for 1
I0720 15:11:52.770106       7 log.go:172] (0xc0068b82c0) (0xc001f4f860) Create stream
I0720 15:11:52.770121       7 log.go:172] (0xc0068b82c0) (0xc001f4f860) Stream added, broadcasting: 3
I0720 15:11:52.771169       7 log.go:172] (0xc0068b82c0) Reply frame received for 3
I0720 15:11:52.771200       7 log.go:172] (0xc0068b82c0) (0xc001f4f9a0) Create stream
I0720 15:11:52.771215       7 log.go:172] (0xc0068b82c0) (0xc001f4f9a0) Stream added, broadcasting: 5
I0720 15:11:52.772278       7 log.go:172] (0xc0068b82c0) Reply frame received for 5
I0720 15:11:52.825544       7 log.go:172] (0xc0068b82c0) Data frame received for 5
I0720 15:11:52.825584       7 log.go:172] (0xc001f4f9a0) (5) Data frame handling
I0720 15:11:52.825608       7 log.go:172] (0xc0068b82c0) Data frame received for 3
I0720 15:11:52.825619       7 log.go:172] (0xc001f4f860) (3) Data frame handling
I0720 15:11:52.825633       7 log.go:172] (0xc001f4f860) (3) Data frame sent
I0720 15:11:52.825645       7 log.go:172] (0xc0068b82c0) Data frame received for 3
I0720 15:11:52.825655       7 log.go:172] (0xc001f4f860) (3) Data frame handling
I0720 15:11:52.826524       7 log.go:172] (0xc0068b82c0) Data frame received for 1
I0720 15:11:52.826548       7 log.go:172] (0xc001f4f720) (1) Data frame handling
I0720 15:11:52.826561       7 log.go:172] (0xc001f4f720) (1) Data frame sent
I0720 15:11:52.826581       7 log.go:172] (0xc0068b82c0) (0xc001f4f720) Stream removed, broadcasting: 1
I0720 15:11:52.826591       7 log.go:172] (0xc0068b82c0) Go away received
I0720 15:11:52.826724       7 log.go:172] (0xc0068b82c0) (0xc001f4f720) Stream removed, broadcasting: 1
I0720 15:11:52.826748       7 log.go:172] (0xc0068b82c0) (0xc001f4f860) Stream removed, broadcasting: 3
I0720 15:11:52.826759       7 log.go:172] (0xc0068b82c0) (0xc001f4f9a0) Stream removed, broadcasting: 5
Jul 20 15:11:52.826: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul 20 15:11:52.826: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:52.826: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:52.856343       7 log.go:172] (0xc002d84a50) (0xc0020165a0) Create stream
I0720 15:11:52.856376       7 log.go:172] (0xc002d84a50) (0xc0020165a0) Stream added, broadcasting: 1
I0720 15:11:52.858692       7 log.go:172] (0xc002d84a50) Reply frame received for 1
I0720 15:11:52.858734       7 log.go:172] (0xc002d84a50) (0xc001f4fae0) Create stream
I0720 15:11:52.858749       7 log.go:172] (0xc002d84a50) (0xc001f4fae0) Stream added, broadcasting: 3
I0720 15:11:52.859817       7 log.go:172] (0xc002d84a50) Reply frame received for 3
I0720 15:11:52.859857       7 log.go:172] (0xc002d84a50) (0xc000cee780) Create stream
I0720 15:11:52.859871       7 log.go:172] (0xc002d84a50) (0xc000cee780) Stream added, broadcasting: 5
I0720 15:11:52.860865       7 log.go:172] (0xc002d84a50) Reply frame received for 5
I0720 15:11:52.933391       7 log.go:172] (0xc002d84a50) Data frame received for 5
I0720 15:11:52.933413       7 log.go:172] (0xc000cee780) (5) Data frame handling
I0720 15:11:52.933437       7 log.go:172] (0xc002d84a50) Data frame received for 3
I0720 15:11:52.933459       7 log.go:172] (0xc001f4fae0) (3) Data frame handling
I0720 15:11:52.933474       7 log.go:172] (0xc001f4fae0) (3) Data frame sent
I0720 15:11:52.933487       7 log.go:172] (0xc002d84a50) Data frame received for 3
I0720 15:11:52.933492       7 log.go:172] (0xc001f4fae0) (3) Data frame handling
I0720 15:11:52.934489       7 log.go:172] (0xc002d84a50) Data frame received for 1
I0720 15:11:52.934508       7 log.go:172] (0xc0020165a0) (1) Data frame handling
I0720 15:11:52.934516       7 log.go:172] (0xc0020165a0) (1) Data frame sent
I0720 15:11:52.934529       7 log.go:172] (0xc002d84a50) (0xc0020165a0) Stream removed, broadcasting: 1
I0720 15:11:52.934616       7 log.go:172] (0xc002d84a50) (0xc0020165a0) Stream removed, broadcasting: 1
I0720 15:11:52.934638       7 log.go:172] (0xc002d84a50) (0xc001f4fae0) Stream removed, broadcasting: 3
I0720 15:11:52.934651       7 log.go:172] (0xc002d84a50) (0xc000cee780) Stream removed, broadcasting: 5
Jul 20 15:11:52.934: INFO: Exec stderr: ""
I0720 15:11:52.934705       7 log.go:172] (0xc002d84a50) Go away received
Jul 20 15:11:52.934: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:52.934: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:52.960465       7 log.go:172] (0xc000d32b00) (0xc000ceeaa0) Create stream
I0720 15:11:52.960502       7 log.go:172] (0xc000d32b00) (0xc000ceeaa0) Stream added, broadcasting: 1
I0720 15:11:52.962581       7 log.go:172] (0xc000d32b00) Reply frame received for 1
I0720 15:11:52.962607       7 log.go:172] (0xc000d32b00) (0xc000ceeb40) Create stream
I0720 15:11:52.962616       7 log.go:172] (0xc000d32b00) (0xc000ceeb40) Stream added, broadcasting: 3
I0720 15:11:52.963407       7 log.go:172] (0xc000d32b00) Reply frame received for 3
I0720 15:11:52.963436       7 log.go:172] (0xc000d32b00) (0xc001f4fc20) Create stream
I0720 15:11:52.963445       7 log.go:172] (0xc000d32b00) (0xc001f4fc20) Stream added, broadcasting: 5
I0720 15:11:52.964224       7 log.go:172] (0xc000d32b00) Reply frame received for 5
I0720 15:11:53.025314       7 log.go:172] (0xc000d32b00) Data frame received for 5
I0720 15:11:53.025352       7 log.go:172] (0xc001f4fc20) (5) Data frame handling
I0720 15:11:53.025382       7 log.go:172] (0xc000d32b00) Data frame received for 3
I0720 15:11:53.025400       7 log.go:172] (0xc000ceeb40) (3) Data frame handling
I0720 15:11:53.025417       7 log.go:172] (0xc000ceeb40) (3) Data frame sent
I0720 15:11:53.025431       7 log.go:172] (0xc000d32b00) Data frame received for 3
I0720 15:11:53.025444       7 log.go:172] (0xc000ceeb40) (3) Data frame handling
I0720 15:11:53.026947       7 log.go:172] (0xc000d32b00) Data frame received for 1
I0720 15:11:53.026984       7 log.go:172] (0xc000ceeaa0) (1) Data frame handling
I0720 15:11:53.027013       7 log.go:172] (0xc000ceeaa0) (1) Data frame sent
I0720 15:11:53.027040       7 log.go:172] (0xc000d32b00) (0xc000ceeaa0) Stream removed, broadcasting: 1
I0720 15:11:53.027183       7 log.go:172] (0xc000d32b00) (0xc000ceeaa0) Stream removed, broadcasting: 1
I0720 15:11:53.027213       7 log.go:172] (0xc000d32b00) (0xc000ceeb40) Stream removed, broadcasting: 3
I0720 15:11:53.027478       7 log.go:172] (0xc000d32b00) (0xc001f4fc20) Stream removed, broadcasting: 5
Jul 20 15:11:53.027: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul 20 15:11:53.027: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:53.027: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:53.029046       7 log.go:172] (0xc000d32b00) Go away received
I0720 15:11:53.059564       7 log.go:172] (0xc002d851e0) (0xc002016a00) Create stream
I0720 15:11:53.059597       7 log.go:172] (0xc002d851e0) (0xc002016a00) Stream added, broadcasting: 1
I0720 15:11:53.061567       7 log.go:172] (0xc002d851e0) Reply frame received for 1
I0720 15:11:53.061607       7 log.go:172] (0xc002d851e0) (0xc001f4fd60) Create stream
I0720 15:11:53.061620       7 log.go:172] (0xc002d851e0) (0xc001f4fd60) Stream added, broadcasting: 3
I0720 15:11:53.062451       7 log.go:172] (0xc002d851e0) Reply frame received for 3
I0720 15:11:53.062487       7 log.go:172] (0xc002d851e0) (0xc002016aa0) Create stream
I0720 15:11:53.062503       7 log.go:172] (0xc002d851e0) (0xc002016aa0) Stream added, broadcasting: 5
I0720 15:11:53.063359       7 log.go:172] (0xc002d851e0) Reply frame received for 5
I0720 15:11:53.113683       7 log.go:172] (0xc002d851e0) Data frame received for 5
I0720 15:11:53.113711       7 log.go:172] (0xc002016aa0) (5) Data frame handling
I0720 15:11:53.113729       7 log.go:172] (0xc002d851e0) Data frame received for 3
I0720 15:11:53.113737       7 log.go:172] (0xc001f4fd60) (3) Data frame handling
I0720 15:11:53.113748       7 log.go:172] (0xc001f4fd60) (3) Data frame sent
I0720 15:11:53.113756       7 log.go:172] (0xc002d851e0) Data frame received for 3
I0720 15:11:53.113763       7 log.go:172] (0xc001f4fd60) (3) Data frame handling
I0720 15:11:53.114899       7 log.go:172] (0xc002d851e0) Data frame received for 1
I0720 15:11:53.114915       7 log.go:172] (0xc002016a00) (1) Data frame handling
I0720 15:11:53.114924       7 log.go:172] (0xc002016a00) (1) Data frame sent
I0720 15:11:53.114944       7 log.go:172] (0xc002d851e0) (0xc002016a00) Stream removed, broadcasting: 1
I0720 15:11:53.114963       7 log.go:172] (0xc002d851e0) Go away received
I0720 15:11:53.115026       7 log.go:172] (0xc002d851e0) (0xc002016a00) Stream removed, broadcasting: 1
I0720 15:11:53.115041       7 log.go:172] (0xc002d851e0) (0xc001f4fd60) Stream removed, broadcasting: 3
I0720 15:11:53.115049       7 log.go:172] (0xc002d851e0) (0xc002016aa0) Stream removed, broadcasting: 5
Jul 20 15:11:53.115: INFO: Exec stderr: ""
Jul 20 15:11:53.115: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:53.115: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:53.139694       7 log.go:172] (0xc0068b88f0) (0xc002074320) Create stream
I0720 15:11:53.139724       7 log.go:172] (0xc0068b88f0) (0xc002074320) Stream added, broadcasting: 1
I0720 15:11:53.143964       7 log.go:172] (0xc0068b88f0) Reply frame received for 1
I0720 15:11:53.144011       7 log.go:172] (0xc0068b88f0) (0xc000ceebe0) Create stream
I0720 15:11:53.144031       7 log.go:172] (0xc0068b88f0) (0xc000ceebe0) Stream added, broadcasting: 3
I0720 15:11:53.145711       7 log.go:172] (0xc0068b88f0) Reply frame received for 3
I0720 15:11:53.145756       7 log.go:172] (0xc0068b88f0) (0xc000ceec80) Create stream
I0720 15:11:53.145767       7 log.go:172] (0xc0068b88f0) (0xc000ceec80) Stream added, broadcasting: 5
I0720 15:11:53.146667       7 log.go:172] (0xc0068b88f0) Reply frame received for 5
I0720 15:11:53.202752       7 log.go:172] (0xc0068b88f0) Data frame received for 5
I0720 15:11:53.202787       7 log.go:172] (0xc000ceec80) (5) Data frame handling
I0720 15:11:53.202849       7 log.go:172] (0xc0068b88f0) Data frame received for 3
I0720 15:11:53.202903       7 log.go:172] (0xc000ceebe0) (3) Data frame handling
I0720 15:11:53.202924       7 log.go:172] (0xc000ceebe0) (3) Data frame sent
I0720 15:11:53.202990       7 log.go:172] (0xc0068b88f0) Data frame received for 3
I0720 15:11:53.203029       7 log.go:172] (0xc000ceebe0) (3) Data frame handling
I0720 15:11:53.204859       7 log.go:172] (0xc0068b88f0) Data frame received for 1
I0720 15:11:53.204883       7 log.go:172] (0xc002074320) (1) Data frame handling
I0720 15:11:53.204895       7 log.go:172] (0xc002074320) (1) Data frame sent
I0720 15:11:53.204917       7 log.go:172] (0xc0068b88f0) (0xc002074320) Stream removed, broadcasting: 1
I0720 15:11:53.204943       7 log.go:172] (0xc0068b88f0) Go away received
I0720 15:11:53.205082       7 log.go:172] (0xc0068b88f0) (0xc002074320) Stream removed, broadcasting: 1
I0720 15:11:53.205102       7 log.go:172] (0xc0068b88f0) (0xc000ceebe0) Stream removed, broadcasting: 3
I0720 15:11:53.205114       7 log.go:172] (0xc0068b88f0) (0xc000ceec80) Stream removed, broadcasting: 5
Jul 20 15:11:53.205: INFO: Exec stderr: ""
Jul 20 15:11:53.205: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:53.205: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:53.260081       7 log.go:172] (0xc000d33130) (0xc000cef220) Create stream
I0720 15:11:53.260120       7 log.go:172] (0xc000d33130) (0xc000cef220) Stream added, broadcasting: 1
I0720 15:11:53.262274       7 log.go:172] (0xc000d33130) Reply frame received for 1
I0720 15:11:53.262316       7 log.go:172] (0xc000d33130) (0xc000cef400) Create stream
I0720 15:11:53.262332       7 log.go:172] (0xc000d33130) (0xc000cef400) Stream added, broadcasting: 3
I0720 15:11:53.263443       7 log.go:172] (0xc000d33130) Reply frame received for 3
I0720 15:11:53.263490       7 log.go:172] (0xc000d33130) (0xc002016b40) Create stream
I0720 15:11:53.263507       7 log.go:172] (0xc000d33130) (0xc002016b40) Stream added, broadcasting: 5
I0720 15:11:53.264552       7 log.go:172] (0xc000d33130) Reply frame received for 5
I0720 15:11:53.313925       7 log.go:172] (0xc000d33130) Data frame received for 5
I0720 15:11:53.313953       7 log.go:172] (0xc002016b40) (5) Data frame handling
I0720 15:11:53.313997       7 log.go:172] (0xc000d33130) Data frame received for 3
I0720 15:11:53.314034       7 log.go:172] (0xc000cef400) (3) Data frame handling
I0720 15:11:53.314067       7 log.go:172] (0xc000cef400) (3) Data frame sent
I0720 15:11:53.314251       7 log.go:172] (0xc000d33130) Data frame received for 3
I0720 15:11:53.314263       7 log.go:172] (0xc000cef400) (3) Data frame handling
I0720 15:11:53.315619       7 log.go:172] (0xc000d33130) Data frame received for 1
I0720 15:11:53.315647       7 log.go:172] (0xc000cef220) (1) Data frame handling
I0720 15:11:53.315669       7 log.go:172] (0xc000cef220) (1) Data frame sent
I0720 15:11:53.315692       7 log.go:172] (0xc000d33130) (0xc000cef220) Stream removed, broadcasting: 1
I0720 15:11:53.315745       7 log.go:172] (0xc000d33130) Go away received
I0720 15:11:53.315851       7 log.go:172] (0xc000d33130) (0xc000cef220) Stream removed, broadcasting: 1
I0720 15:11:53.315878       7 log.go:172] (0xc000d33130) (0xc000cef400) Stream removed, broadcasting: 3
I0720 15:11:53.315897       7 log.go:172] (0xc000d33130) (0xc002016b40) Stream removed, broadcasting: 5
Jul 20 15:11:53.315: INFO: Exec stderr: ""
Jul 20 15:11:53.315: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1365 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 15:11:53.315: INFO: >>> kubeConfig: /root/.kube/config
I0720 15:11:53.344683       7 log.go:172] (0xc00474e630) (0xc001460960) Create stream
I0720 15:11:53.344711       7 log.go:172] (0xc00474e630) (0xc001460960) Stream added, broadcasting: 1
I0720 15:11:53.346387       7 log.go:172] (0xc00474e630) Reply frame received for 1
I0720 15:11:53.346422       7 log.go:172] (0xc00474e630) (0xc001460b40) Create stream
I0720 15:11:53.346438       7 log.go:172] (0xc00474e630) (0xc001460b40) Stream added, broadcasting: 3
I0720 15:11:53.347154       7 log.go:172] (0xc00474e630) Reply frame received for 3
I0720 15:11:53.347178       7 log.go:172] (0xc00474e630) (0xc000cef4a0) Create stream
I0720 15:11:53.347188       7 log.go:172] (0xc00474e630) (0xc000cef4a0) Stream added, broadcasting: 5
I0720 15:11:53.347839       7 log.go:172] (0xc00474e630) Reply frame received for 5
I0720 15:11:53.417367       7 log.go:172] (0xc00474e630) Data frame received for 5
I0720 15:11:53.417403       7 log.go:172] (0xc00474e630) Data frame received for 3
I0720 15:11:53.417431       7 log.go:172] (0xc001460b40) (3) Data frame handling
I0720 15:11:53.417440       7 log.go:172] (0xc001460b40) (3) Data frame sent
I0720 15:11:53.417449       7 log.go:172] (0xc00474e630) Data frame received for 3
I0720 15:11:53.417454       7 log.go:172] (0xc001460b40) (3) Data frame handling
I0720 15:11:53.417462       7 log.go:172] (0xc000cef4a0) (5) Data frame handling
I0720 15:11:53.418416       7 log.go:172] (0xc00474e630) Data frame received for 1
I0720 15:11:53.418433       7 log.go:172] (0xc001460960) (1) Data frame handling
I0720 15:11:53.418452       7 log.go:172] (0xc001460960) (1) Data frame sent
I0720 15:11:53.418467       7 log.go:172] (0xc00474e630) (0xc001460960) Stream removed, broadcasting: 1
I0720 15:11:53.418485       7 log.go:172] (0xc00474e630) Go away received
I0720 15:11:53.418566       7 log.go:172] (0xc00474e630) (0xc001460960) Stream removed, broadcasting: 1
I0720 15:11:53.418581       7 log.go:172] (0xc00474e630) (0xc001460b40) Stream removed, broadcasting: 3
I0720 15:11:53.418593       7 log.go:172] (0xc00474e630) (0xc000cef4a0) Stream removed, broadcasting: 5
Jul 20 15:11:53.418: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:11:53.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1365" for this suite.

• [SLOW TEST:11.192 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4058,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:11:53.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Jul 20 15:11:53.521: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:11:53.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-714" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":243,"skipped":4086,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:11:53.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:11:57.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7474" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4099,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:11:57.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:12:13.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8670" for this suite.

• [SLOW TEST:16.227 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":245,"skipped":4107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:12:13.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Jul 20 15:12:14.045: INFO: Waiting up to 5m0s for pod "client-containers-18682a77-395a-488e-abaa-26b448672723" in namespace "containers-1438" to be "Succeeded or Failed"
Jul 20 15:12:14.068: INFO: Pod "client-containers-18682a77-395a-488e-abaa-26b448672723": Phase="Pending", Reason="", readiness=false. Elapsed: 22.337983ms
Jul 20 15:12:16.074: INFO: Pod "client-containers-18682a77-395a-488e-abaa-26b448672723": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028364792s
Jul 20 15:12:18.165: INFO: Pod "client-containers-18682a77-395a-488e-abaa-26b448672723": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119939483s
STEP: Saw pod success
Jul 20 15:12:18.165: INFO: Pod "client-containers-18682a77-395a-488e-abaa-26b448672723" satisfied condition "Succeeded or Failed"
Jul 20 15:12:18.169: INFO: Trying to get logs from node kali-worker pod client-containers-18682a77-395a-488e-abaa-26b448672723 container test-container: 
STEP: delete the pod
Jul 20 15:12:18.225: INFO: Waiting for pod client-containers-18682a77-395a-488e-abaa-26b448672723 to disappear
Jul 20 15:12:18.235: INFO: Pod client-containers-18682a77-395a-488e-abaa-26b448672723 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:12:18.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1438" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4144,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:12:18.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 20 15:12:18.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3973'
Jul 20 15:12:18.437: INFO: stderr: ""
Jul 20 15:12:18.437: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul 20 15:12:23.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3973 -o json'
Jul 20 15:12:23.575: INFO: stderr: ""
Jul 20 15:12:23.575: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-20T15:12:18Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-07-20T15:12:18Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.1.5\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                         
   }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-07-20T15:12:21Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-3973\",\n        \"resourceVersion\": \"2750631\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3973/pods/e2e-test-httpd-pod\",\n        \"uid\": \"391b88ba-5b28-465d-9c0f-968bf935681f\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-h67xz\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-h67xz\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-h67xz\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-20T15:12:18Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-20T15:12:21Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-20T15:12:21Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-20T15:12:18Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://3d5088d7a5afe29f8d3fc13f86d25a81f21585fcbf9ec2f134c44e7b95f8fdda\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-07-20T15:12:21Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.15\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.5\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.5\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-07-20T15:12:18Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul 20 15:12:23.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3973'
Jul 20 15:12:23.890: INFO: stderr: ""
Jul 20 15:12:23.890: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul 20 15:12:23.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3973'
Jul 20 15:12:33.464: INFO: stderr: ""
Jul 20 15:12:33.464: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:12:33.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3973" for this suite.

• [SLOW TEST:15.311 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":247,"skipped":4155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:12:33.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 20 15:12:33.938: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e500356-3305-449d-9765-377e362e5786" in namespace "projected-9627" to be "Succeeded or Failed"
Jul 20 15:12:33.977: INFO: Pod "downwardapi-volume-4e500356-3305-449d-9765-377e362e5786": Phase="Pending", Reason="", readiness=false. Elapsed: 39.574105ms
Jul 20 15:12:36.142: INFO: Pod "downwardapi-volume-4e500356-3305-449d-9765-377e362e5786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203823791s
Jul 20 15:12:38.145: INFO: Pod "downwardapi-volume-4e500356-3305-449d-9765-377e362e5786": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20700871s
Jul 20 15:12:40.148: INFO: Pod "downwardapi-volume-4e500356-3305-449d-9765-377e362e5786": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210092245s
STEP: Saw pod success
Jul 20 15:12:40.148: INFO: Pod "downwardapi-volume-4e500356-3305-449d-9765-377e362e5786" satisfied condition "Succeeded or Failed"
Jul 20 15:12:40.151: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-4e500356-3305-449d-9765-377e362e5786 container client-container: 
STEP: delete the pod
Jul 20 15:12:40.182: INFO: Waiting for pod downwardapi-volume-4e500356-3305-449d-9765-377e362e5786 to disappear
Jul 20 15:12:40.206: INFO: Pod downwardapi-volume-4e500356-3305-449d-9765-377e362e5786 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:12:40.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9627" for this suite.

• [SLOW TEST:6.659 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:12:40.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3566
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3566
I0720 15:12:40.774575       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3566, replica count: 2
I0720 15:12:43.825244       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 15:12:46.825512       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 20 15:12:46.825: INFO: Creating new exec pod
Jul 20 15:12:54.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3566 execpodc9t2w -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 20 15:12:54.512: INFO: stderr: "I0720 15:12:54.457405    3243 log.go:172] (0xc0006e8790) (0xc0006ac3c0) Create stream\nI0720 15:12:54.457448    3243 log.go:172] (0xc0006e8790) (0xc0006ac3c0) Stream added, broadcasting: 1\nI0720 15:12:54.459919    3243 log.go:172] (0xc0006e8790) Reply frame received for 1\nI0720 15:12:54.459966    3243 log.go:172] (0xc0006e8790) (0xc0005bb7c0) Create stream\nI0720 15:12:54.459981    3243 log.go:172] (0xc0006e8790) (0xc0005bb7c0) Stream added, broadcasting: 3\nI0720 15:12:54.460967    3243 log.go:172] (0xc0006e8790) Reply frame received for 3\nI0720 15:12:54.461006    3243 log.go:172] (0xc0006e8790) (0xc0006ac460) Create stream\nI0720 15:12:54.461020    3243 log.go:172] (0xc0006e8790) (0xc0006ac460) Stream added, broadcasting: 5\nI0720 15:12:54.461963    3243 log.go:172] (0xc0006e8790) Reply frame received for 5\nI0720 15:12:54.504271    3243 log.go:172] (0xc0006e8790) Data frame received for 5\nI0720 15:12:54.504304    3243 log.go:172] (0xc0006ac460) (5) Data frame handling\nI0720 15:12:54.504326    3243 log.go:172] (0xc0006ac460) (5) Data frame sent\nI0720 15:12:54.504338    3243 log.go:172] (0xc0006e8790) Data frame received for 5\nI0720 15:12:54.504344    3243 log.go:172] (0xc0006ac460) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0720 15:12:54.504366    3243 log.go:172] (0xc0006ac460) (5) Data frame sent\nI0720 15:12:54.504467    3243 log.go:172] (0xc0006e8790) Data frame received for 5\nI0720 15:12:54.504484    3243 log.go:172] (0xc0006ac460) (5) Data frame handling\nI0720 15:12:54.504752    3243 log.go:172] (0xc0006e8790) Data frame received for 3\nI0720 15:12:54.504775    3243 log.go:172] (0xc0005bb7c0) (3) Data frame handling\nI0720 15:12:54.506599    3243 log.go:172] (0xc0006e8790) Data frame received for 1\nI0720 15:12:54.506620    3243 log.go:172] (0xc0006ac3c0) (1) Data frame handling\nI0720 15:12:54.506635    3243 log.go:172] (0xc0006ac3c0) (1) Data frame sent\nI0720 15:12:54.506664    3243 log.go:172] (0xc0006e8790) (0xc0006ac3c0) Stream removed, broadcasting: 1\nI0720 15:12:54.506688    3243 log.go:172] (0xc0006e8790) Go away received\nI0720 15:12:54.506915    3243 log.go:172] (0xc0006e8790) (0xc0006ac3c0) Stream removed, broadcasting: 1\nI0720 15:12:54.506932    3243 log.go:172] (0xc0006e8790) (0xc0005bb7c0) Stream removed, broadcasting: 3\nI0720 15:12:54.506938    3243 log.go:172] (0xc0006e8790) (0xc0006ac460) Stream removed, broadcasting: 5\n"
Jul 20 15:12:54.512: INFO: stdout: ""
Jul 20 15:12:54.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3566 execpodc9t2w -- /bin/sh -x -c nc -zv -t -w 2 10.111.56.235 80'
Jul 20 15:12:54.708: INFO: stderr: "I0720 15:12:54.640640    3263 log.go:172] (0xc00099f4a0) (0xc000afd360) Create stream\nI0720 15:12:54.640690    3263 log.go:172] (0xc00099f4a0) (0xc000afd360) Stream added, broadcasting: 1\nI0720 15:12:54.644883    3263 log.go:172] (0xc00099f4a0) Reply frame received for 1\nI0720 15:12:54.644923    3263 log.go:172] (0xc00099f4a0) (0xc00063f680) Create stream\nI0720 15:12:54.644936    3263 log.go:172] (0xc00099f4a0) (0xc00063f680) Stream added, broadcasting: 3\nI0720 15:12:54.645879    3263 log.go:172] (0xc00099f4a0) Reply frame received for 3\nI0720 15:12:54.645918    3263 log.go:172] (0xc00099f4a0) (0xc0004faaa0) Create stream\nI0720 15:12:54.645931    3263 log.go:172] (0xc00099f4a0) (0xc0004faaa0) Stream added, broadcasting: 5\nI0720 15:12:54.646902    3263 log.go:172] (0xc00099f4a0) Reply frame received for 5\nI0720 15:12:54.700932    3263 log.go:172] (0xc00099f4a0) Data frame received for 5\nI0720 15:12:54.700963    3263 log.go:172] (0xc0004faaa0) (5) Data frame handling\nI0720 15:12:54.700985    3263 log.go:172] (0xc0004faaa0) (5) Data frame sent\nI0720 15:12:54.700996    3263 log.go:172] (0xc00099f4a0) Data frame received for 5\nI0720 15:12:54.701006    3263 log.go:172] (0xc0004faaa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.56.235 80\nConnection to 10.111.56.235 80 port [tcp/http] succeeded!\nI0720 15:12:54.701032    3263 log.go:172] (0xc0004faaa0) (5) Data frame sent\nI0720 15:12:54.701359    3263 log.go:172] (0xc00099f4a0) Data frame received for 3\nI0720 15:12:54.701389    3263 log.go:172] (0xc00063f680) (3) Data frame handling\nI0720 15:12:54.701419    3263 log.go:172] (0xc00099f4a0) Data frame received for 5\nI0720 15:12:54.701432    3263 log.go:172] (0xc0004faaa0) (5) Data frame handling\nI0720 15:12:54.702969    3263 log.go:172] (0xc00099f4a0) Data frame received for 1\nI0720 15:12:54.703010    3263 log.go:172] (0xc000afd360) (1) Data frame handling\nI0720 15:12:54.703037    3263 log.go:172] (0xc000afd360) (1) Data frame sent\nI0720 15:12:54.703067    3263 log.go:172] (0xc00099f4a0) (0xc000afd360) Stream removed, broadcasting: 1\nI0720 15:12:54.703116    3263 log.go:172] (0xc00099f4a0) Go away received\nI0720 15:12:54.703561    3263 log.go:172] (0xc00099f4a0) (0xc000afd360) Stream removed, broadcasting: 1\nI0720 15:12:54.703600    3263 log.go:172] (0xc00099f4a0) (0xc00063f680) Stream removed, broadcasting: 3\nI0720 15:12:54.703624    3263 log.go:172] (0xc00099f4a0) (0xc0004faaa0) Stream removed, broadcasting: 5\n"
Jul 20 15:12:54.708: INFO: stdout: ""
Jul 20 15:12:54.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3566 execpodc9t2w -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30402'
Jul 20 15:12:54.918: INFO: stderr: "I0720 15:12:54.844539    3284 log.go:172] (0xc00072abb0) (0xc000724140) Create stream\nI0720 15:12:54.844599    3284 log.go:172] (0xc00072abb0) (0xc000724140) Stream added, broadcasting: 1\nI0720 15:12:54.846632    3284 log.go:172] (0xc00072abb0) Reply frame received for 1\nI0720 15:12:54.846664    3284 log.go:172] (0xc00072abb0) (0xc000643360) Create stream\nI0720 15:12:54.846672    3284 log.go:172] (0xc00072abb0) (0xc000643360) Stream added, broadcasting: 3\nI0720 15:12:54.847481    3284 log.go:172] (0xc00072abb0) Reply frame received for 3\nI0720 15:12:54.847515    3284 log.go:172] (0xc00072abb0) (0xc0000c6000) Create stream\nI0720 15:12:54.847523    3284 log.go:172] (0xc00072abb0) (0xc0000c6000) Stream added, broadcasting: 5\nI0720 15:12:54.848283    3284 log.go:172] (0xc00072abb0) Reply frame received for 5\nI0720 15:12:54.910845    3284 log.go:172] (0xc00072abb0) Data frame received for 5\nI0720 15:12:54.910880    3284 log.go:172] (0xc0000c6000) (5) Data frame handling\nI0720 15:12:54.910914    3284 log.go:172] (0xc0000c6000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 30402\nConnection to 172.18.0.13 30402 port [tcp/30402] succeeded!\nI0720 15:12:54.911085    3284 log.go:172] (0xc00072abb0) Data frame received for 5\nI0720 15:12:54.911130    3284 log.go:172] (0xc0000c6000) (5) Data frame handling\nI0720 15:12:54.911410    3284 log.go:172] (0xc00072abb0) Data frame received for 3\nI0720 15:12:54.911433    3284 log.go:172] (0xc000643360) (3) Data frame handling\nI0720 15:12:54.913219    3284 log.go:172] (0xc00072abb0) Data frame received for 1\nI0720 15:12:54.913254    3284 log.go:172] (0xc000724140) (1) Data frame handling\nI0720 15:12:54.913270    3284 log.go:172] (0xc000724140) (1) Data frame sent\nI0720 15:12:54.913284    3284 log.go:172] (0xc00072abb0) (0xc000724140) Stream removed, broadcasting: 1\nI0720 15:12:54.913299    3284 log.go:172] (0xc00072abb0) Go away received\nI0720 15:12:54.913762    3284 log.go:172] (0xc00072abb0) (0xc000724140) Stream removed, broadcasting: 1\nI0720 15:12:54.913786    3284 log.go:172] (0xc00072abb0) (0xc000643360) Stream removed, broadcasting: 3\nI0720 15:12:54.913798    3284 log.go:172] (0xc00072abb0) (0xc0000c6000) Stream removed, broadcasting: 5\n"
Jul 20 15:12:54.918: INFO: stdout: ""
Jul 20 15:12:54.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3566 execpodc9t2w -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30402'
Jul 20 15:12:55.124: INFO: stderr: "I0720 15:12:55.050715    3306 log.go:172] (0xc000a1a000) (0xc000b1e000) Create stream\nI0720 15:12:55.050776    3306 log.go:172] (0xc000a1a000) (0xc000b1e000) Stream added, broadcasting: 1\nI0720 15:12:55.054188    3306 log.go:172] (0xc000a1a000) Reply frame received for 1\nI0720 15:12:55.054223    3306 log.go:172] (0xc000a1a000) (0xc000b1e0a0) Create stream\nI0720 15:12:55.054234    3306 log.go:172] (0xc000a1a000) (0xc000b1e0a0) Stream added, broadcasting: 3\nI0720 15:12:55.055211    3306 log.go:172] (0xc000a1a000) Reply frame received for 3\nI0720 15:12:55.055253    3306 log.go:172] (0xc000a1a000) (0xc000a3a000) Create stream\nI0720 15:12:55.055279    3306 log.go:172] (0xc000a1a000) (0xc000a3a000) Stream added, broadcasting: 5\nI0720 15:12:55.056244    3306 log.go:172] (0xc000a1a000) Reply frame received for 5\nI0720 15:12:55.115079    3306 log.go:172] (0xc000a1a000) Data frame received for 5\nI0720 15:12:55.115106    3306 log.go:172] (0xc000a3a000) (5) Data frame handling\nI0720 15:12:55.115126    3306 log.go:172] (0xc000a3a000) (5) Data frame sent\nI0720 15:12:55.115135    3306 log.go:172] (0xc000a1a000) Data frame received for 5\nI0720 15:12:55.115143    3306 log.go:172] (0xc000a3a000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30402\nConnection to 172.18.0.15 30402 port [tcp/30402] succeeded!\nI0720 15:12:55.115172    3306 log.go:172] (0xc000a3a000) (5) Data frame sent\nI0720 15:12:55.115557    3306 log.go:172] (0xc000a1a000) Data frame received for 3\nI0720 15:12:55.115574    3306 log.go:172] (0xc000b1e0a0) (3) Data frame handling\nI0720 15:12:55.115620    3306 log.go:172] (0xc000a1a000) Data frame received for 5\nI0720 15:12:55.115647    3306 log.go:172] (0xc000a3a000) (5) Data frame handling\nI0720 15:12:55.117816    3306 log.go:172] (0xc000a1a000) Data frame received for 1\nI0720 15:12:55.117836    3306 log.go:172] (0xc000b1e000) (1) Data frame handling\nI0720 15:12:55.117847    3306 log.go:172] (0xc000b1e000) (1) Data frame sent\nI0720 15:12:55.117859    3306 log.go:172] (0xc000a1a000) (0xc000b1e000) Stream removed, broadcasting: 1\nI0720 15:12:55.117913    3306 log.go:172] (0xc000a1a000) Go away received\nI0720 15:12:55.118373    3306 log.go:172] (0xc000a1a000) (0xc000b1e000) Stream removed, broadcasting: 1\nI0720 15:12:55.118400    3306 log.go:172] (0xc000a1a000) (0xc000b1e0a0) Stream removed, broadcasting: 3\nI0720 15:12:55.118413    3306 log.go:172] (0xc000a1a000) (0xc000a3a000) Stream removed, broadcasting: 5\n"
Jul 20 15:12:55.124: INFO: stdout: ""
Jul 20 15:12:55.124: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:12:55.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3566" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.014 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":249,"skipped":4201,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:12:55.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:12:55.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7044" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":250,"skipped":4233,"failed":0}
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:12:55.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Jul 20 15:12:55.479: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5409" to be "Succeeded or Failed"
Jul 20 15:12:55.482: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.723501ms
Jul 20 15:12:57.604: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125220436s
Jul 20 15:12:59.607: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12883786s
Jul 20 15:13:01.611: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 6.132463693s
Jul 20 15:13:03.615: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.136775852s
STEP: Saw pod success
Jul 20 15:13:03.615: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul 20 15:13:03.620: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul 20 15:13:03.877: INFO: Waiting for pod pod-host-path-test to disappear
Jul 20 15:13:03.979: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:13:03.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5409" for this suite.

• [SLOW TEST:8.646 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4238,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:13:04.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul 20 15:13:08.775: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1228 pod-service-account-5c2c3383-4eab-4a7d-ac03-82010515f3b0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul 20 15:13:08.994: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1228 pod-service-account-5c2c3383-4eab-4a7d-ac03-82010515f3b0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul 20 15:13:09.230: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1228 pod-service-account-5c2c3383-4eab-4a7d-ac03-82010515f3b0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:13:09.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1228" for this suite.

• [SLOW TEST:5.397 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":252,"skipped":4252,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:13:09.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Jul 20 15:13:09.497: INFO: Waiting up to 5m0s for pod "client-containers-450deb5b-738f-4cfc-bc96-0a74cc4ede02" in namespace "containers-2818" to be "Succeeded or Failed"
Jul 20 15:13:09.514: INFO: Pod "client-containers-450deb5b-738f-4cfc-bc96-0a74cc4ede02": Phase="Pending", Reason="", readiness=false. Elapsed: 16.391816ms
Jul 20 15:13:11.518: INFO: Pod "client-containers-450deb5b-738f-4cfc-bc96-0a74cc4ede02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020715242s
Jul 20 15:13:13.523: INFO: Pod "client-containers-450deb5b-738f-4cfc-bc96-0a74cc4ede02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025700185s
STEP: Saw pod success
Jul 20 15:13:13.523: INFO: Pod "client-containers-450deb5b-738f-4cfc-bc96-0a74cc4ede02" satisfied condition "Succeeded or Failed"
Jul 20 15:13:13.527: INFO: Trying to get logs from node kali-worker pod client-containers-450deb5b-738f-4cfc-bc96-0a74cc4ede02 container test-container: 
STEP: delete the pod
Jul 20 15:13:13.560: INFO: Waiting for pod client-containers-450deb5b-738f-4cfc-bc96-0a74cc4ede02 to disappear
Jul 20 15:13:13.573: INFO: Pod client-containers-450deb5b-738f-4cfc-bc96-0a74cc4ede02 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:13:13.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2818" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4254,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:13:13.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Jul 20 15:13:13.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4201'
Jul 20 15:13:13.924: INFO: stderr: ""
Jul 20 15:13:13.924: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 15:13:13.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4201'
Jul 20 15:13:14.055: INFO: stderr: ""
Jul 20 15:13:14.055: INFO: stdout: "update-demo-nautilus-jxwrn update-demo-nautilus-wv8h9 "
Jul 20 15:13:14.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxwrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:14.181: INFO: stderr: ""
Jul 20 15:13:14.181: INFO: stdout: ""
Jul 20 15:13:14.181: INFO: update-demo-nautilus-jxwrn is created but not running
Jul 20 15:13:19.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4201'
Jul 20 15:13:19.463: INFO: stderr: ""
Jul 20 15:13:19.463: INFO: stdout: "update-demo-nautilus-jxwrn update-demo-nautilus-wv8h9 "
Jul 20 15:13:19.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxwrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:19.699: INFO: stderr: ""
Jul 20 15:13:19.699: INFO: stdout: ""
Jul 20 15:13:19.699: INFO: update-demo-nautilus-jxwrn is created but not running
Jul 20 15:13:24.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4201'
Jul 20 15:13:24.795: INFO: stderr: ""
Jul 20 15:13:24.795: INFO: stdout: "update-demo-nautilus-jxwrn update-demo-nautilus-wv8h9 "
Jul 20 15:13:24.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxwrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:24.886: INFO: stderr: ""
Jul 20 15:13:24.886: INFO: stdout: "true"
Jul 20 15:13:24.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxwrn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:24.970: INFO: stderr: ""
Jul 20 15:13:24.970: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 15:13:24.970: INFO: validating pod update-demo-nautilus-jxwrn
Jul 20 15:13:24.999: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 15:13:24.999: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 15:13:24.999: INFO: update-demo-nautilus-jxwrn is verified up and running
Jul 20 15:13:24.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wv8h9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:25.093: INFO: stderr: ""
Jul 20 15:13:25.093: INFO: stdout: "true"
Jul 20 15:13:25.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wv8h9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:25.189: INFO: stderr: ""
Jul 20 15:13:25.189: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 15:13:25.189: INFO: validating pod update-demo-nautilus-wv8h9
Jul 20 15:13:25.193: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 15:13:25.193: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 15:13:25.193: INFO: update-demo-nautilus-wv8h9 is verified up and running
STEP: scaling down the replication controller
Jul 20 15:13:25.196: INFO: scanned /root for discovery docs: 
Jul 20 15:13:25.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4201'
Jul 20 15:13:26.350: INFO: stderr: ""
Jul 20 15:13:26.350: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 15:13:26.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4201'
Jul 20 15:13:26.458: INFO: stderr: ""
Jul 20 15:13:26.458: INFO: stdout: "update-demo-nautilus-jxwrn update-demo-nautilus-wv8h9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 20 15:13:31.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4201'
Jul 20 15:13:31.565: INFO: stderr: ""
Jul 20 15:13:31.565: INFO: stdout: "update-demo-nautilus-wv8h9 "
Jul 20 15:13:31.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wv8h9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:31.647: INFO: stderr: ""
Jul 20 15:13:31.647: INFO: stdout: "true"
Jul 20 15:13:31.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wv8h9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:31.753: INFO: stderr: ""
Jul 20 15:13:31.753: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 15:13:31.753: INFO: validating pod update-demo-nautilus-wv8h9
Jul 20 15:13:31.757: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 15:13:31.757: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 15:13:31.757: INFO: update-demo-nautilus-wv8h9 is verified up and running
STEP: scaling up the replication controller
Jul 20 15:13:31.760: INFO: scanned /root for discovery docs: 
Jul 20 15:13:31.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4201'
Jul 20 15:13:32.886: INFO: stderr: ""
Jul 20 15:13:32.886: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 15:13:32.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4201'
Jul 20 15:13:32.985: INFO: stderr: ""
Jul 20 15:13:32.985: INFO: stdout: "update-demo-nautilus-ftbn7 update-demo-nautilus-wv8h9 "
Jul 20 15:13:32.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftbn7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:33.075: INFO: stderr: ""
Jul 20 15:13:33.075: INFO: stdout: ""
Jul 20 15:13:33.075: INFO: update-demo-nautilus-ftbn7 is created but not running
Jul 20 15:13:38.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4201'
Jul 20 15:13:38.220: INFO: stderr: ""
Jul 20 15:13:38.220: INFO: stdout: "update-demo-nautilus-ftbn7 update-demo-nautilus-wv8h9 "
Jul 20 15:13:38.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftbn7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:38.322: INFO: stderr: ""
Jul 20 15:13:38.322: INFO: stdout: "true"
Jul 20 15:13:38.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftbn7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:38.417: INFO: stderr: ""
Jul 20 15:13:38.417: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 15:13:38.417: INFO: validating pod update-demo-nautilus-ftbn7
Jul 20 15:13:38.422: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 15:13:38.422: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 15:13:38.422: INFO: update-demo-nautilus-ftbn7 is verified up and running
Jul 20 15:13:38.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wv8h9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:38.505: INFO: stderr: ""
Jul 20 15:13:38.505: INFO: stdout: "true"
Jul 20 15:13:38.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wv8h9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4201'
Jul 20 15:13:38.603: INFO: stderr: ""
Jul 20 15:13:38.603: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 15:13:38.603: INFO: validating pod update-demo-nautilus-wv8h9
Jul 20 15:13:38.606: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 15:13:38.606: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 15:13:38.606: INFO: update-demo-nautilus-wv8h9 is verified up and running
STEP: using delete to clean up resources
Jul 20 15:13:38.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4201'
Jul 20 15:13:38.729: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 15:13:38.729: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 20 15:13:38.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4201'
Jul 20 15:13:38.830: INFO: stderr: "No resources found in kubectl-4201 namespace.\n"
Jul 20 15:13:38.830: INFO: stdout: ""
Jul 20 15:13:38.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4201 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 15:13:38.938: INFO: stderr: ""
Jul 20 15:13:38.938: INFO: stdout: "update-demo-nautilus-ftbn7\nupdate-demo-nautilus-wv8h9\n"
Jul 20 15:13:39.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4201'
Jul 20 15:13:39.544: INFO: stderr: "No resources found in kubectl-4201 namespace.\n"
Jul 20 15:13:39.544: INFO: stdout: ""
Jul 20 15:13:39.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4201 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 15:13:39.682: INFO: stderr: ""
Jul 20 15:13:39.682: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:13:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4201" for this suite.

• [SLOW TEST:26.107 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":254,"skipped":4280,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:13:39.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-82cd2b76-4b5b-44af-85d3-6a72bc229376
STEP: Creating a pod to test consume configMaps
Jul 20 15:13:40.444: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04923ac5-c22e-432e-9450-f342e5613682" in namespace "projected-6449" to be "Succeeded or Failed"
Jul 20 15:13:40.482: INFO: Pod "pod-projected-configmaps-04923ac5-c22e-432e-9450-f342e5613682": Phase="Pending", Reason="", readiness=false. Elapsed: 37.773662ms
Jul 20 15:13:42.609: INFO: Pod "pod-projected-configmaps-04923ac5-c22e-432e-9450-f342e5613682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164307806s
Jul 20 15:13:44.613: INFO: Pod "pod-projected-configmaps-04923ac5-c22e-432e-9450-f342e5613682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168330218s
STEP: Saw pod success
Jul 20 15:13:44.613: INFO: Pod "pod-projected-configmaps-04923ac5-c22e-432e-9450-f342e5613682" satisfied condition "Succeeded or Failed"
Jul 20 15:13:44.616: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-04923ac5-c22e-432e-9450-f342e5613682 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 15:13:44.707: INFO: Waiting for pod pod-projected-configmaps-04923ac5-c22e-432e-9450-f342e5613682 to disappear
Jul 20 15:13:44.741: INFO: Pod pod-projected-configmaps-04923ac5-c22e-432e-9450-f342e5613682 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:13:44.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6449" for this suite.

• [SLOW TEST:5.084 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4287,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:13:44.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 15:13:45.563: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 15:13:47.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854825, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854825, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854825, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854825, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 15:13:49.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854825, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854825, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854825, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854825, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 15:13:52.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:13:52.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7565" for this suite.
STEP: Destroying namespace "webhook-7565-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.167 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":256,"skipped":4291,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:13:52.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 20 15:13:57.607: INFO: Successfully updated pod "pod-update-activedeadlineseconds-331fe0de-a91c-45f0-9cb4-b8615700c6c3"
Jul 20 15:13:57.607: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-331fe0de-a91c-45f0-9cb4-b8615700c6c3" in namespace "pods-9625" to be "terminated due to deadline exceeded"
Jul 20 15:13:57.635: INFO: Pod "pod-update-activedeadlineseconds-331fe0de-a91c-45f0-9cb4-b8615700c6c3": Phase="Running", Reason="", readiness=true. Elapsed: 28.039525ms
Jul 20 15:13:59.650: INFO: Pod "pod-update-activedeadlineseconds-331fe0de-a91c-45f0-9cb4-b8615700c6c3": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.043213906s
Jul 20 15:13:59.650: INFO: Pod "pod-update-activedeadlineseconds-331fe0de-a91c-45f0-9cb4-b8615700c6c3" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:13:59.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9625" for this suite.

• [SLOW TEST:6.895 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4294,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:13:59.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:14:16.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2513" for this suite.

• [SLOW TEST:16.982 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":258,"skipped":4311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:14:16.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:14:24.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9129" for this suite.

• [SLOW TEST:7.331 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":259,"skipped":4335,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:14:24.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-765a583e-4299-454d-9931-9c0442ae14ac
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-765a583e-4299-454d-9931-9c0442ae14ac
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:14:30.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5174" for this suite.

• [SLOW TEST:6.728 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4340,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:14:30.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-0e33f9c9-b9c0-4f70-a8af-4be16e536362
STEP: Creating a pod to test consume secrets
Jul 20 15:14:31.130: INFO: Waiting up to 5m0s for pod "pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197" in namespace "secrets-9296" to be "Succeeded or Failed"
Jul 20 15:14:31.163: INFO: Pod "pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197": Phase="Pending", Reason="", readiness=false. Elapsed: 32.915565ms
Jul 20 15:14:33.202: INFO: Pod "pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071459625s
Jul 20 15:14:35.205: INFO: Pod "pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074932647s
Jul 20 15:14:37.292: INFO: Pod "pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.162081895s
STEP: Saw pod success
Jul 20 15:14:37.293: INFO: Pod "pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197" satisfied condition "Succeeded or Failed"
Jul 20 15:14:37.384: INFO: Trying to get logs from node kali-worker pod pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197 container secret-volume-test: 
STEP: delete the pod
Jul 20 15:14:37.953: INFO: Waiting for pod pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197 to disappear
Jul 20 15:14:38.534: INFO: Pod pod-secrets-4fd5d6e0-50ad-4a04-b766-97112bd44197 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:14:38.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9296" for this suite.

• [SLOW TEST:7.925 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4383,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:14:38.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 15:14:39.448: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:14:40.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7082" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":262,"skipped":4415,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:14:40.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jul 20 15:14:40.950: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 15:14:43.907: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:14:54.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5049" for this suite.

• [SLOW TEST:13.775 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":263,"skipped":4429,"failed":0}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:14:54.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 20 15:14:54.601: INFO: Waiting up to 5m0s for pod "downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07" in namespace "downward-api-329" to be "Succeeded or Failed"
Jul 20 15:14:54.651: INFO: Pod "downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07": Phase="Pending", Reason="", readiness=false. Elapsed: 50.461358ms
Jul 20 15:14:56.788: INFO: Pod "downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18740407s
Jul 20 15:14:58.792: INFO: Pod "downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191483297s
Jul 20 15:15:00.853: INFO: Pod "downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.251803115s
STEP: Saw pod success
Jul 20 15:15:00.853: INFO: Pod "downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07" satisfied condition "Succeeded or Failed"
Jul 20 15:15:00.856: INFO: Trying to get logs from node kali-worker2 pod downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07 container dapi-container: 
STEP: delete the pod
Jul 20 15:15:00.917: INFO: Waiting for pod downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07 to disappear
Jul 20 15:15:00.924: INFO: Pod downward-api-58e24bcc-4ff4-445f-9892-c70d14cf7d07 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:15:00.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-329" for this suite.

• [SLOW TEST:6.386 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4434,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:15:00.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul 20 15:15:05.298: INFO: &Pod{ObjectMeta:{send-events-3ce8b194-d87b-4562-a81c-5bf0135d237c  events-3821 /api/v1/namespaces/events-3821/pods/send-events-3ce8b194-d87b-4562-a81c-5bf0135d237c c3337d4e-01c4-41cc-b5d5-a008fd28a7bb 2751715 0 2020-07-20 15:15:01 +0000 UTC   map[name:foo time:79707852] map[] [] []  [{e2e.test Update v1 2020-07-20 15:15:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-20 15:15:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t6l7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t6l7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t6l7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 15:15:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 15:15:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 15:15:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 15:15:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.14,StartTime:2020-07-20 15:15:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 15:15:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://046a2302b972a5ad7f75f7b50f4ba6f37c5e16ba5de5316de7ebb252b3b6a0d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jul 20 15:15:07.310: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul 20 15:15:09.315: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:15:09.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3821" for this suite.

• [SLOW TEST:8.449 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":265,"skipped":4440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:15:09.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 15:15:11.003: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 15:15:13.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854911, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854911, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854911, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854910, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 15:15:16.048: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:15:16.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8942" for this suite.
STEP: Destroying namespace "webhook-8942-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.353 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":266,"skipped":4488,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:15:16.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 15:15:17.612: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jul 20 15:15:19.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854917, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854917, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854917, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854917, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 15:15:21.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854917, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854917, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854917, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730854917, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 15:15:24.772: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:15:25.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9757" for this suite.
STEP: Destroying namespace "webhook-9757-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.443 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":267,"skipped":4495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:15:25.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 20 15:15:32.293: INFO: Successfully updated pod "pod-update-ad10a476-2a32-45d2-8d2a-6e5325ec36d1"
STEP: verifying the updated pod is in kubernetes
Jul 20 15:15:32.423: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:15:32.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3543" for this suite.

• [SLOW TEST:7.252 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:15:32.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 15:15:33.022: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul 20 15:15:33.107: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:33.160: INFO: Number of nodes with available pods: 0
Jul 20 15:15:33.160: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:15:34.166: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:34.170: INFO: Number of nodes with available pods: 0
Jul 20 15:15:34.170: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:15:35.431: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:35.489: INFO: Number of nodes with available pods: 0
Jul 20 15:15:35.489: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:15:36.167: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:36.170: INFO: Number of nodes with available pods: 0
Jul 20 15:15:36.170: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:15:37.165: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:37.168: INFO: Number of nodes with available pods: 0
Jul 20 15:15:37.168: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:15:38.198: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:38.243: INFO: Number of nodes with available pods: 1
Jul 20 15:15:38.243: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:15:39.262: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:39.431: INFO: Number of nodes with available pods: 1
Jul 20 15:15:39.431: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:15:40.307: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:40.311: INFO: Number of nodes with available pods: 1
Jul 20 15:15:40.311: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:15:41.166: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:41.169: INFO: Number of nodes with available pods: 2
Jul 20 15:15:41.169: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul 20 15:15:41.437: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:41.437: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:41.517: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:42.521: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:42.521: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:42.562: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:43.521: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:43.521: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:43.521: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:43.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:44.520: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:44.520: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:44.520: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:44.524: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:45.544: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:45.544: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:45.544: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:45.547: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:46.521: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:46.521: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:46.521: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:46.525: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:47.522: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:47.522: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:47.522: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:47.525: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:48.827: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:48.827: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:48.827: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:49.116: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:49.522: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:49.522: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:49.522: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:49.527: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:50.521: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:50.521: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:50.521: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:50.526: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:51.532: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:51.532: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:51.532: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:51.536: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:52.521: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:52.521: INFO: Wrong image for pod: daemon-set-lgr7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:52.521: INFO: Pod daemon-set-lgr7z is not available
Jul 20 15:15:52.525: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:53.527: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:53.527: INFO: Pod daemon-set-9cdj7 is not available
Jul 20 15:15:53.530: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:54.526: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:54.526: INFO: Pod daemon-set-9cdj7 is not available
Jul 20 15:15:54.533: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:55.521: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:55.521: INFO: Pod daemon-set-9cdj7 is not available
Jul 20 15:15:55.525: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:56.521: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:56.525: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:57.522: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:57.525: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:58.522: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:58.522: INFO: Pod daemon-set-7n65b is not available
Jul 20 15:15:58.526: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:15:59.522: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:15:59.522: INFO: Pod daemon-set-7n65b is not available
Jul 20 15:15:59.527: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:16:00.522: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:16:00.522: INFO: Pod daemon-set-7n65b is not available
Jul 20 15:16:00.526: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:16:01.521: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:16:01.521: INFO: Pod daemon-set-7n65b is not available
Jul 20 15:16:01.526: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:16:02.522: INFO: Wrong image for pod: daemon-set-7n65b. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 20 15:16:02.522: INFO: Pod daemon-set-7n65b is not available
Jul 20 15:16:02.526: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:16:03.552: INFO: Pod daemon-set-7qhsg is not available
Jul 20 15:16:03.557: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul 20 15:16:03.561: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:16:03.564: INFO: Number of nodes with available pods: 1
Jul 20 15:16:03.564: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:16:04.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:16:04.571: INFO: Number of nodes with available pods: 1
Jul 20 15:16:04.571: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:16:05.587: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:16:05.591: INFO: Number of nodes with available pods: 1
Jul 20 15:16:05.591: INFO: Node kali-worker is running more than one daemon pod
Jul 20 15:16:06.570: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 15:16:06.575: INFO: Number of nodes with available pods: 2
Jul 20 15:16:06.575: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8388, will wait for the garbage collector to delete the pods
Jul 20 15:16:06.648: INFO: Deleting DaemonSet.extensions daemon-set took: 5.902037ms
Jul 20 15:16:06.948: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265135ms
Jul 20 15:16:13.466: INFO: Number of nodes with available pods: 0
Jul 20 15:16:13.466: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 15:16:13.468: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8388/daemonsets","resourceVersion":"2752179"},"items":null}

Jul 20 15:16:13.470: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8388/pods","resourceVersion":"2752179"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:16:13.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8388" for this suite.

• [SLOW TEST:41.055 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":269,"skipped":4547,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:16:13.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-1fd7f4ec-3d32-4696-927d-732dee108b1c
STEP: Creating a pod to test consume configMaps
Jul 20 15:16:13.540: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1166a176-1913-4147-82f4-e577bfe220b4" in namespace "projected-2213" to be "Succeeded or Failed"
Jul 20 15:16:13.553: INFO: Pod "pod-projected-configmaps-1166a176-1913-4147-82f4-e577bfe220b4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.021819ms
Jul 20 15:16:15.562: INFO: Pod "pod-projected-configmaps-1166a176-1913-4147-82f4-e577bfe220b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021588769s
Jul 20 15:16:17.604: INFO: Pod "pod-projected-configmaps-1166a176-1913-4147-82f4-e577bfe220b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063295963s
STEP: Saw pod success
Jul 20 15:16:17.604: INFO: Pod "pod-projected-configmaps-1166a176-1913-4147-82f4-e577bfe220b4" satisfied condition "Succeeded or Failed"
Jul 20 15:16:17.611: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-1166a176-1913-4147-82f4-e577bfe220b4 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 15:16:17.717: INFO: Waiting for pod pod-projected-configmaps-1166a176-1913-4147-82f4-e577bfe220b4 to disappear
Jul 20 15:16:17.722: INFO: Pod pod-projected-configmaps-1166a176-1913-4147-82f4-e577bfe220b4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:16:17.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2213" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:16:17.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 20 15:16:17.838: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jul 20 15:16:19.886: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:16:20.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5679" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":271,"skipped":4599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:16:20.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:16:22.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8993" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":272,"skipped":4624,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:16:22.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 20 15:16:23.228: INFO: Waiting up to 5m0s for pod "pod-2244df29-a127-4195-872c-4e668ddbade1" in namespace "emptydir-6244" to be "Succeeded or Failed"
Jul 20 15:16:23.328: INFO: Pod "pod-2244df29-a127-4195-872c-4e668ddbade1": Phase="Pending", Reason="", readiness=false. Elapsed: 100.413796ms
Jul 20 15:16:25.333: INFO: Pod "pod-2244df29-a127-4195-872c-4e668ddbade1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105193764s
Jul 20 15:16:27.340: INFO: Pod "pod-2244df29-a127-4195-872c-4e668ddbade1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111937088s
STEP: Saw pod success
Jul 20 15:16:27.340: INFO: Pod "pod-2244df29-a127-4195-872c-4e668ddbade1" satisfied condition "Succeeded or Failed"
Jul 20 15:16:27.343: INFO: Trying to get logs from node kali-worker2 pod pod-2244df29-a127-4195-872c-4e668ddbade1 container test-container: 
STEP: delete the pod
Jul 20 15:16:27.409: INFO: Waiting for pod pod-2244df29-a127-4195-872c-4e668ddbade1 to disappear
Jul 20 15:16:27.718: INFO: Pod pod-2244df29-a127-4195-872c-4e668ddbade1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:16:27.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6244" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4659,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:16:27.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jul 20 15:16:34.473: INFO: Successfully updated pod "adopt-release-4xr8z"
STEP: Checking that the Job readopts the Pod
Jul 20 15:16:34.473: INFO: Waiting up to 15m0s for pod "adopt-release-4xr8z" in namespace "job-7104" to be "adopted"
Jul 20 15:16:34.476: INFO: Pod "adopt-release-4xr8z": Phase="Running", Reason="", readiness=true. Elapsed: 2.962947ms
Jul 20 15:16:36.480: INFO: Pod "adopt-release-4xr8z": Phase="Running", Reason="", readiness=true. Elapsed: 2.007306908s
Jul 20 15:16:36.480: INFO: Pod "adopt-release-4xr8z" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jul 20 15:16:36.991: INFO: Successfully updated pod "adopt-release-4xr8z"
STEP: Checking that the Job releases the Pod
Jul 20 15:16:36.991: INFO: Waiting up to 15m0s for pod "adopt-release-4xr8z" in namespace "job-7104" to be "released"
Jul 20 15:16:37.035: INFO: Pod "adopt-release-4xr8z": Phase="Running", Reason="", readiness=true. Elapsed: 44.734593ms
Jul 20 15:16:39.095: INFO: Pod "adopt-release-4xr8z": Phase="Running", Reason="", readiness=true. Elapsed: 2.104641668s
Jul 20 15:16:39.095: INFO: Pod "adopt-release-4xr8z" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:16:39.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7104" for this suite.

• [SLOW TEST:11.377 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":274,"skipped":4702,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 20 15:16:39.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0720 15:16:49.485361       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 15:16:49.485: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 20 15:16:49.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3606" for this suite.

• [SLOW TEST:10.389 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":275,"skipped":4712,"failed":0}
SSSSS
Jul 20 15:16:49.495: INFO: Running AfterSuite actions on all nodes
Jul 20 15:16:49.495: INFO: Running AfterSuite actions on node 1
Jul 20 15:16:49.495: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 6258.077 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS